00:00:00.001 Started by upstream project "autotest-per-patch" build number 121253 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.087 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.088 The recommended git tool is: git 00:00:00.088 using credential 00000000-0000-0000-0000-000000000002 00:00:00.090 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.139 Fetching changes from the remote Git repository 00:00:00.141 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.190 Using shallow fetch with depth 1 00:00:00.190 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.190 > git --version # timeout=10 00:00:00.220 > git --version # 'git version 2.39.2' 00:00:00.220 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.220 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.220 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.152 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.169 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.181 Checking out Revision e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f (FETCH_HEAD) 00:00:05.181 > git config core.sparsecheckout # timeout=10 00:00:05.193 > git read-tree -mu HEAD # timeout=10 00:00:05.211 > git checkout -f e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f # timeout=5 00:00:05.230 Commit message: "jenkins/reset: add APC-C14 and APC-C18" 00:00:05.230 > git rev-list --no-walk e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f # timeout=10 00:00:05.322 [Pipeline] Start of Pipeline 00:00:05.338 [Pipeline] library 00:00:05.339 Loading library shm_lib@master 00:00:05.340 Library shm_lib@master is cached. Copying from home. 00:00:05.357 [Pipeline] node 00:00:20.359 Still waiting to schedule task 00:00:20.360 Waiting for next available executor on ‘vagrant-vm-host’ 00:12:01.420 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:12:01.422 [Pipeline] { 00:12:01.435 [Pipeline] catchError 00:12:01.436 [Pipeline] { 00:12:01.455 [Pipeline] wrap 00:12:01.467 [Pipeline] { 00:12:01.477 [Pipeline] stage 00:12:01.481 [Pipeline] { (Prologue) 00:12:01.506 [Pipeline] echo 00:12:01.507 Node: VM-host-SM0 00:12:01.515 [Pipeline] cleanWs 00:12:01.525 [WS-CLEANUP] Deleting project workspace... 00:12:01.525 [WS-CLEANUP] Deferred wipeout is used... 
00:12:01.530 [WS-CLEANUP] done 00:12:01.704 [Pipeline] setCustomBuildProperty 00:12:01.781 [Pipeline] nodesByLabel 00:12:01.783 Found a total of 1 nodes with the 'sorcerer' label 00:12:01.793 [Pipeline] httpRequest 00:12:01.798 HttpMethod: GET 00:12:01.798 URL: http://10.211.164.96/packages/jbp_e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f.tar.gz 00:12:01.799 Sending request to url: http://10.211.164.96/packages/jbp_e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f.tar.gz 00:12:01.801 Response Code: HTTP/1.1 200 OK 00:12:01.801 Success: Status code 200 is in the accepted range: 200,404 00:12:01.802 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f.tar.gz 00:12:01.940 [Pipeline] sh 00:12:02.220 + tar --no-same-owner -xf jbp_e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f.tar.gz 00:12:02.241 [Pipeline] httpRequest 00:12:02.245 HttpMethod: GET 00:12:02.246 URL: http://10.211.164.96/packages/spdk_e29339c01a5cf6bb5c14d857ddc961139254746f.tar.gz 00:12:02.247 Sending request to url: http://10.211.164.96/packages/spdk_e29339c01a5cf6bb5c14d857ddc961139254746f.tar.gz 00:12:02.247 Response Code: HTTP/1.1 200 OK 00:12:02.248 Success: Status code 200 is in the accepted range: 200,404 00:12:02.248 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_e29339c01a5cf6bb5c14d857ddc961139254746f.tar.gz 00:12:05.471 [Pipeline] sh 00:12:05.751 + tar --no-same-owner -xf spdk_e29339c01a5cf6bb5c14d857ddc961139254746f.tar.gz 00:12:09.083 [Pipeline] sh 00:12:09.362 + git -C spdk log --oneline -n5 00:12:09.362 e29339c01 [TEST] release claimed base bdevs for raids in configuring state 00:12:09.362 06472fb6d lib/idxd: fix batch size in kernel IDXD 00:12:09.362 44dcf4fb9 pkgdep/idxd: Add dependency for accel-config used in kernel IDXD 00:12:09.362 3dbaa93c1 nvmf: pass command dword 12 and 13 for write 00:12:09.362 19327fc3a bdev/nvme: use dtype/dspec for write commands 00:12:09.381 [Pipeline] writeFile 00:12:09.394 [Pipeline] sh 00:12:09.671 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:12:09.684 [Pipeline] sh 00:12:09.973 + cat autorun-spdk.conf 00:12:09.973 SPDK_RUN_FUNCTIONAL_TEST=1 00:12:09.973 SPDK_TEST_NVMF=1 00:12:09.973 SPDK_TEST_NVMF_TRANSPORT=tcp 00:12:09.973 SPDK_TEST_URING=1 00:12:09.973 SPDK_TEST_USDT=1 00:12:09.973 SPDK_RUN_UBSAN=1 00:12:09.973 NET_TYPE=virt 00:12:09.973 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:12:09.988 RUN_NIGHTLY=0 00:12:09.990 [Pipeline] } 00:12:10.019 [Pipeline] // stage 00:12:10.038 [Pipeline] stage 00:12:10.040 [Pipeline] { (Run VM) 00:12:10.065 [Pipeline] sh 00:12:10.364 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:12:10.364 + echo 'Start stage prepare_nvme.sh' 00:12:10.364 Start stage prepare_nvme.sh 00:12:10.364 + [[ -n 4 ]] 00:12:10.364 + disk_prefix=ex4 00:12:10.364 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:12:10.365 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:12:10.365 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:12:10.365 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:12:10.365 ++ SPDK_TEST_NVMF=1 00:12:10.365 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:12:10.365 ++ SPDK_TEST_URING=1 00:12:10.365 ++ SPDK_TEST_USDT=1 00:12:10.365 ++ SPDK_RUN_UBSAN=1 00:12:10.365 ++ NET_TYPE=virt 00:12:10.365 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:12:10.365 ++ RUN_NIGHTLY=0 00:12:10.365 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:12:10.365 + nvme_files=() 00:12:10.365 + declare -A nvme_files 
00:12:10.365 + backend_dir=/var/lib/libvirt/images/backends 00:12:10.365 + nvme_files['nvme.img']=5G 00:12:10.365 + nvme_files['nvme-cmb.img']=5G 00:12:10.365 + nvme_files['nvme-multi0.img']=4G 00:12:10.365 + nvme_files['nvme-multi1.img']=4G 00:12:10.365 + nvme_files['nvme-multi2.img']=4G 00:12:10.365 + nvme_files['nvme-openstack.img']=8G 00:12:10.365 + nvme_files['nvme-zns.img']=5G 00:12:10.365 + (( SPDK_TEST_NVME_PMR == 1 )) 00:12:10.365 + (( SPDK_TEST_FTL == 1 )) 00:12:10.365 + (( SPDK_TEST_NVME_FDP == 1 )) 00:12:10.365 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:12:10.365 + for nvme in "${!nvme_files[@]}" 00:12:10.365 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:12:10.365 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:12:10.365 + for nvme in "${!nvme_files[@]}" 00:12:10.365 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:12:10.365 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:12:10.365 + for nvme in "${!nvme_files[@]}" 00:12:10.365 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:12:10.623 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:12:10.623 + for nvme in "${!nvme_files[@]}" 00:12:10.623 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:12:10.623 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:12:10.623 + for nvme in "${!nvme_files[@]}" 00:12:10.623 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:12:10.623 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:12:10.623 + for nvme in "${!nvme_files[@]}" 00:12:10.623 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:12:10.623 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:12:10.623 + for nvme in "${!nvme_files[@]}" 00:12:10.623 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:12:10.623 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:12:10.623 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:12:10.623 + echo 'End stage prepare_nvme.sh' 00:12:10.623 End stage prepare_nvme.sh 00:12:10.635 [Pipeline] sh 00:12:10.915 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:12:10.915 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38 00:12:10.915 00:12:10.915 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:12:10.915 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:12:10.915 
VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:12:10.915 HELP=0 00:12:10.915 DRY_RUN=0 00:12:10.915 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:12:10.915 NVME_DISKS_TYPE=nvme,nvme, 00:12:10.915 NVME_AUTO_CREATE=0 00:12:10.915 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:12:10.915 NVME_CMB=,, 00:12:10.915 NVME_PMR=,, 00:12:10.915 NVME_ZNS=,, 00:12:10.915 NVME_MS=,, 00:12:10.915 NVME_FDP=,, 00:12:10.915 SPDK_VAGRANT_DISTRO=fedora38 00:12:10.915 SPDK_VAGRANT_VMCPU=10 00:12:10.915 SPDK_VAGRANT_VMRAM=12288 00:12:10.915 SPDK_VAGRANT_PROVIDER=libvirt 00:12:10.915 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:12:10.915 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:12:10.915 SPDK_OPENSTACK_NETWORK=0 00:12:10.915 VAGRANT_PACKAGE_BOX=0 00:12:10.915 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:12:10.915 FORCE_DISTRO=true 00:12:10.915 VAGRANT_BOX_VERSION= 00:12:10.915 EXTRA_VAGRANTFILES= 00:12:10.915 NIC_MODEL=e1000 00:12:10.915 00:12:10.915 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:12:10.915 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:12:14.203 Bringing machine 'default' up with 'libvirt' provider... 00:12:15.162 ==> default: Creating image (snapshot of base box volume). 00:12:15.162 ==> default: Creating domain with the following settings... 00:12:15.162 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1714133347_e80fa3f95f49a09bd362 00:12:15.162 ==> default: -- Domain type: kvm 00:12:15.162 ==> default: -- Cpus: 10 00:12:15.162 ==> default: -- Feature: acpi 00:12:15.162 ==> default: -- Feature: apic 00:12:15.162 ==> default: -- Feature: pae 00:12:15.162 ==> default: -- Memory: 12288M 00:12:15.162 ==> default: -- Memory Backing: hugepages: 00:12:15.162 ==> default: -- Management MAC: 00:12:15.162 ==> default: -- Loader: 00:12:15.162 ==> default: -- Nvram: 00:12:15.162 ==> default: -- Base box: spdk/fedora38 00:12:15.162 ==> default: -- Storage pool: default 00:12:15.162 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1714133347_e80fa3f95f49a09bd362.img (20G) 00:12:15.162 ==> default: -- Volume Cache: default 00:12:15.162 ==> default: -- Kernel: 00:12:15.162 ==> default: -- Initrd: 00:12:15.162 ==> default: -- Graphics Type: vnc 00:12:15.162 ==> default: -- Graphics Port: -1 00:12:15.162 ==> default: -- Graphics IP: 127.0.0.1 00:12:15.162 ==> default: -- Graphics Password: Not defined 00:12:15.162 ==> default: -- Video Type: cirrus 00:12:15.162 ==> default: -- Video VRAM: 9216 00:12:15.162 ==> default: -- Sound Type: 00:12:15.162 ==> default: -- Keymap: en-us 00:12:15.162 ==> default: -- TPM Path: 00:12:15.162 ==> default: -- INPUT: type=mouse, bus=ps2 00:12:15.162 ==> default: -- Command line args: 00:12:15.162 ==> default: -> value=-device, 00:12:15.162 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:12:15.162 ==> default: -> value=-drive, 00:12:15.162 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:12:15.421 ==> default: -> value=-device, 00:12:15.421 ==> default: -> 
value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:12:15.421 ==> default: -> value=-device, 00:12:15.421 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:12:15.421 ==> default: -> value=-drive, 00:12:15.421 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:12:15.421 ==> default: -> value=-device, 00:12:15.421 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:12:15.421 ==> default: -> value=-drive, 00:12:15.421 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:12:15.421 ==> default: -> value=-device, 00:12:15.421 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:12:15.421 ==> default: -> value=-drive, 00:12:15.421 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:12:15.421 ==> default: -> value=-device, 00:12:15.421 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:12:15.679 ==> default: Creating shared folders metadata... 00:12:15.679 ==> default: Starting domain. 00:12:17.579 ==> default: Waiting for domain to get an IP address... 00:12:35.656 ==> default: Waiting for SSH to become available... 00:12:35.656 ==> default: Configuring and enabling network interfaces... 00:12:39.846 default: SSH address: 192.168.121.66:22 00:12:39.846 default: SSH username: vagrant 00:12:39.846 default: SSH auth method: private key 00:12:41.773 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:12:49.955 ==> default: Mounting SSHFS shared folder... 00:12:50.889 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:12:50.889 ==> default: Checking Mount.. 00:12:51.823 ==> default: Folder Successfully Mounted! 00:12:51.823 ==> default: Running provisioner: file... 00:12:52.758 default: ~/.gitconfig => .gitconfig 00:12:53.017 00:12:53.017 SUCCESS! 00:12:53.017 00:12:53.017 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:12:53.017 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:12:53.017 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
00:12:53.017 00:12:53.026 [Pipeline] } 00:12:53.044 [Pipeline] // stage 00:12:53.054 [Pipeline] dir 00:12:53.055 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:12:53.057 [Pipeline] { 00:12:53.069 [Pipeline] catchError 00:12:53.070 [Pipeline] { 00:12:53.082 [Pipeline] sh 00:12:53.359 + vagrant ssh-config --host vagrant 00:12:53.359 + sed -ne /^Host/,$p 00:12:53.359 + tee ssh_conf 00:12:57.552 Host vagrant 00:12:57.552 HostName 192.168.121.66 00:12:57.552 User vagrant 00:12:57.552 Port 22 00:12:57.552 UserKnownHostsFile /dev/null 00:12:57.552 StrictHostKeyChecking no 00:12:57.552 PasswordAuthentication no 00:12:57.552 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:12:57.552 IdentitiesOnly yes 00:12:57.552 LogLevel FATAL 00:12:57.552 ForwardAgent yes 00:12:57.552 ForwardX11 yes 00:12:57.552 00:12:57.580 [Pipeline] withEnv 00:12:57.583 [Pipeline] { 00:12:57.599 [Pipeline] sh 00:12:57.877 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:12:57.877 source /etc/os-release 00:12:57.877 [[ -e /image.version ]] && img=$(< /image.version) 00:12:57.877 # Minimal, systemd-like check. 00:12:57.877 if [[ -e /.dockerenv ]]; then 00:12:57.877 # Clear garbage from the node's name: 00:12:57.877 # agt-er_autotest_547-896 -> autotest_547-896 00:12:57.877 # $HOSTNAME is the actual container id 00:12:57.877 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:12:57.877 if mountpoint -q /etc/hostname; then 00:12:57.877 # We can assume this is a mount from a host where container is running, 00:12:57.877 # so fetch its hostname to easily identify the target swarm worker. 00:12:57.877 container="$(< /etc/hostname) ($agent)" 00:12:57.877 else 00:12:57.877 # Fallback 00:12:57.877 container=$agent 00:12:57.877 fi 00:12:57.877 fi 00:12:57.877 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:12:57.877 00:12:57.888 [Pipeline] } 00:12:57.909 [Pipeline] // withEnv 00:12:57.917 [Pipeline] setCustomBuildProperty 00:12:57.932 [Pipeline] stage 00:12:57.934 [Pipeline] { (Tests) 00:12:57.949 [Pipeline] sh 00:12:58.223 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:12:58.497 [Pipeline] timeout 00:12:58.497 Timeout set to expire in 30 min 00:12:58.499 [Pipeline] { 00:12:58.516 [Pipeline] sh 00:12:58.795 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:12:59.361 HEAD is now at e29339c01 [TEST] release claimed base bdevs for raids in configuring state 00:12:59.374 [Pipeline] sh 00:12:59.660 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:12:59.953 [Pipeline] sh 00:13:00.232 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:13:00.506 [Pipeline] sh 00:13:00.785 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:13:01.043 ++ readlink -f spdk_repo 00:13:01.043 + DIR_ROOT=/home/vagrant/spdk_repo 00:13:01.043 + [[ -n /home/vagrant/spdk_repo ]] 00:13:01.043 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:13:01.043 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:13:01.043 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:13:01.043 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:13:01.043 + [[ -d /home/vagrant/spdk_repo/output ]] 00:13:01.043 + cd /home/vagrant/spdk_repo 00:13:01.043 + source /etc/os-release 00:13:01.043 ++ NAME='Fedora Linux' 00:13:01.043 ++ VERSION='38 (Cloud Edition)' 00:13:01.043 ++ ID=fedora 00:13:01.043 ++ VERSION_ID=38 00:13:01.043 ++ VERSION_CODENAME= 00:13:01.043 ++ PLATFORM_ID=platform:f38 00:13:01.043 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:13:01.043 ++ ANSI_COLOR='0;38;2;60;110;180' 00:13:01.043 ++ LOGO=fedora-logo-icon 00:13:01.043 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:13:01.043 ++ HOME_URL=https://fedoraproject.org/ 00:13:01.043 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:13:01.043 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:13:01.043 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:13:01.043 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:13:01.043 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:13:01.043 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:13:01.043 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:13:01.043 ++ SUPPORT_END=2024-05-14 00:13:01.043 ++ VARIANT='Cloud Edition' 00:13:01.043 ++ VARIANT_ID=cloud 00:13:01.043 + uname -a 00:13:01.043 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:13:01.043 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:13:01.627 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:01.627 Hugepages 00:13:01.627 node hugesize free / total 00:13:01.627 node0 1048576kB 0 / 0 00:13:01.627 node0 2048kB 0 / 0 00:13:01.627 00:13:01.627 Type BDF Vendor Device NUMA Driver Device Block devices 00:13:01.627 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:13:01.627 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:13:01.627 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:13:01.627 + rm -f /tmp/spdk-ld-path 00:13:01.627 + source autorun-spdk.conf 00:13:01.627 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:13:01.627 ++ SPDK_TEST_NVMF=1 00:13:01.627 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:13:01.627 ++ SPDK_TEST_URING=1 00:13:01.627 ++ SPDK_TEST_USDT=1 00:13:01.627 ++ SPDK_RUN_UBSAN=1 00:13:01.627 ++ NET_TYPE=virt 00:13:01.627 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:13:01.627 ++ RUN_NIGHTLY=0 00:13:01.627 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:13:01.627 + [[ -n '' ]] 00:13:01.627 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:13:01.627 + for M in /var/spdk/build-*-manifest.txt 00:13:01.627 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:13:01.627 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:13:01.627 + for M in /var/spdk/build-*-manifest.txt 00:13:01.627 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:13:01.627 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:13:01.627 ++ uname 00:13:01.627 + [[ Linux == \L\i\n\u\x ]] 00:13:01.627 + sudo dmesg -T 00:13:01.627 + sudo dmesg --clear 00:13:01.886 + dmesg_pid=5166 00:13:01.886 + [[ Fedora Linux == FreeBSD ]] 00:13:01.886 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:01.886 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:01.886 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:13:01.886 + sudo dmesg -Tw 00:13:01.886 + [[ -x /usr/src/fio-static/fio ]] 00:13:01.886 + export FIO_BIN=/usr/src/fio-static/fio 00:13:01.886 + FIO_BIN=/usr/src/fio-static/fio 00:13:01.886 + [[ '' == 
\/\q\e\m\u\_\v\f\i\o\/* ]] 00:13:01.886 + [[ ! -v VFIO_QEMU_BIN ]] 00:13:01.886 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:13:01.886 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:01.886 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:01.886 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:13:01.886 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:01.886 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:01.886 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:13:01.886 Test configuration: 00:13:01.886 SPDK_RUN_FUNCTIONAL_TEST=1 00:13:01.886 SPDK_TEST_NVMF=1 00:13:01.886 SPDK_TEST_NVMF_TRANSPORT=tcp 00:13:01.886 SPDK_TEST_URING=1 00:13:01.886 SPDK_TEST_USDT=1 00:13:01.886 SPDK_RUN_UBSAN=1 00:13:01.886 NET_TYPE=virt 00:13:01.886 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:13:01.886 RUN_NIGHTLY=0 12:09:55 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:01.886 12:09:55 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:13:01.886 12:09:55 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.886 12:09:55 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.886 12:09:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.886 12:09:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.886 12:09:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.886 12:09:55 -- paths/export.sh@5 -- $ export PATH 00:13:01.886 12:09:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.886 12:09:55 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:13:01.886 12:09:55 -- common/autobuild_common.sh@435 -- $ date +%s 00:13:01.886 12:09:55 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714133395.XXXXXX 00:13:01.886 12:09:55 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714133395.kAxbga 00:13:01.886 12:09:55 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:13:01.886 12:09:55 -- 
common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:13:01.886 12:09:55 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:13:01.886 12:09:55 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:13:01.886 12:09:55 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:13:01.886 12:09:55 -- common/autobuild_common.sh@451 -- $ get_config_params 00:13:01.886 12:09:55 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:13:01.886 12:09:55 -- common/autotest_common.sh@10 -- $ set +x 00:13:01.886 12:09:55 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:13:01.886 12:09:55 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:13:01.886 12:09:55 -- pm/common@17 -- $ local monitor 00:13:01.886 12:09:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:01.886 12:09:55 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5200 00:13:01.886 12:09:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:01.886 12:09:55 -- pm/common@21 -- $ date +%s 00:13:01.886 12:09:55 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5202 00:13:01.886 12:09:55 -- pm/common@26 -- $ sleep 1 00:13:01.886 12:09:55 -- pm/common@21 -- $ date +%s 00:13:01.886 12:09:55 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1714133395 00:13:01.886 12:09:55 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1714133395 00:13:02.145 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1714133395_collect-cpu-load.pm.log 00:13:02.145 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1714133395_collect-vmstat.pm.log 00:13:03.124 12:09:56 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:13:03.124 12:09:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:13:03.124 12:09:56 -- spdk/autobuild.sh@12 -- $ umask 022 00:13:03.124 12:09:56 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:13:03.124 12:09:56 -- spdk/autobuild.sh@16 -- $ date -u 00:13:03.124 Fri Apr 26 12:09:56 PM UTC 2024 00:13:03.124 12:09:56 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:13:03.124 v24.05-pre-449-ge29339c01 00:13:03.124 12:09:56 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:13:03.124 12:09:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:13:03.125 12:09:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:13:03.125 12:09:56 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:13:03.125 12:09:56 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:13:03.125 12:09:56 -- common/autotest_common.sh@10 -- $ set +x 00:13:03.125 ************************************ 00:13:03.125 START TEST ubsan 00:13:03.125 ************************************ 00:13:03.125 using ubsan 00:13:03.125 12:09:56 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 
00:13:03.125 00:13:03.125 real 0m0.001s 00:13:03.125 user 0m0.000s 00:13:03.125 sys 0m0.000s 00:13:03.125 12:09:56 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:13:03.125 12:09:56 -- common/autotest_common.sh@10 -- $ set +x 00:13:03.125 ************************************ 00:13:03.125 END TEST ubsan 00:13:03.125 ************************************ 00:13:03.125 12:09:56 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:13:03.125 12:09:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:13:03.125 12:09:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:13:03.125 12:09:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:13:03.125 12:09:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:13:03.125 12:09:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:13:03.125 12:09:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:13:03.125 12:09:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:13:03.125 12:09:56 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:13:03.125 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:03.125 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:13:03.692 Using 'verbs' RDMA provider 00:13:16.832 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:13:31.709 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:13:31.709 Creating mk/config.mk...done. 00:13:31.709 Creating mk/cc.flags.mk...done. 00:13:31.709 Type 'make' to build. 00:13:31.709 12:10:23 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:13:31.709 12:10:23 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:13:31.709 12:10:23 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:13:31.709 12:10:23 -- common/autotest_common.sh@10 -- $ set +x 00:13:31.709 ************************************ 00:13:31.709 START TEST make 00:13:31.709 ************************************ 00:13:31.709 12:10:23 -- common/autotest_common.sh@1111 -- $ make -j10 00:13:31.709 make[1]: Nothing to be done for 'all'. 
00:13:41.679 The Meson build system 00:13:41.679 Version: 1.3.1 00:13:41.679 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:13:41.679 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:13:41.679 Build type: native build 00:13:41.679 Program cat found: YES (/usr/bin/cat) 00:13:41.679 Project name: DPDK 00:13:41.679 Project version: 23.11.0 00:13:41.679 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:13:41.679 C linker for the host machine: cc ld.bfd 2.39-16 00:13:41.679 Host machine cpu family: x86_64 00:13:41.679 Host machine cpu: x86_64 00:13:41.679 Message: ## Building in Developer Mode ## 00:13:41.679 Program pkg-config found: YES (/usr/bin/pkg-config) 00:13:41.679 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:13:41.679 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:13:41.679 Program python3 found: YES (/usr/bin/python3) 00:13:41.679 Program cat found: YES (/usr/bin/cat) 00:13:41.679 Compiler for C supports arguments -march=native: YES 00:13:41.679 Checking for size of "void *" : 8 00:13:41.679 Checking for size of "void *" : 8 (cached) 00:13:41.679 Library m found: YES 00:13:41.679 Library numa found: YES 00:13:41.679 Has header "numaif.h" : YES 00:13:41.679 Library fdt found: NO 00:13:41.679 Library execinfo found: NO 00:13:41.679 Has header "execinfo.h" : YES 00:13:41.679 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:13:41.679 Run-time dependency libarchive found: NO (tried pkgconfig) 00:13:41.679 Run-time dependency libbsd found: NO (tried pkgconfig) 00:13:41.679 Run-time dependency jansson found: NO (tried pkgconfig) 00:13:41.679 Run-time dependency openssl found: YES 3.0.9 00:13:41.679 Run-time dependency libpcap found: YES 1.10.4 00:13:41.679 Has header "pcap.h" with dependency libpcap: YES 00:13:41.679 Compiler for C supports arguments -Wcast-qual: YES 00:13:41.679 Compiler for C supports arguments -Wdeprecated: YES 00:13:41.679 Compiler for C supports arguments -Wformat: YES 00:13:41.679 Compiler for C supports arguments -Wformat-nonliteral: NO 00:13:41.679 Compiler for C supports arguments -Wformat-security: NO 00:13:41.679 Compiler for C supports arguments -Wmissing-declarations: YES 00:13:41.679 Compiler for C supports arguments -Wmissing-prototypes: YES 00:13:41.679 Compiler for C supports arguments -Wnested-externs: YES 00:13:41.679 Compiler for C supports arguments -Wold-style-definition: YES 00:13:41.679 Compiler for C supports arguments -Wpointer-arith: YES 00:13:41.679 Compiler for C supports arguments -Wsign-compare: YES 00:13:41.679 Compiler for C supports arguments -Wstrict-prototypes: YES 00:13:41.679 Compiler for C supports arguments -Wundef: YES 00:13:41.679 Compiler for C supports arguments -Wwrite-strings: YES 00:13:41.679 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:13:41.679 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:13:41.679 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:13:41.679 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:13:41.679 Program objdump found: YES (/usr/bin/objdump) 00:13:41.679 Compiler for C supports arguments -mavx512f: YES 00:13:41.679 Checking if "AVX512 checking" compiles: YES 00:13:41.679 Fetching value of define "__SSE4_2__" : 1 00:13:41.679 Fetching value of define "__AES__" : 1 00:13:41.679 Fetching value of define "__AVX__" : 1 00:13:41.679 
Fetching value of define "__AVX2__" : 1 00:13:41.679 Fetching value of define "__AVX512BW__" : (undefined) 00:13:41.679 Fetching value of define "__AVX512CD__" : (undefined) 00:13:41.679 Fetching value of define "__AVX512DQ__" : (undefined) 00:13:41.679 Fetching value of define "__AVX512F__" : (undefined) 00:13:41.680 Fetching value of define "__AVX512VL__" : (undefined) 00:13:41.680 Fetching value of define "__PCLMUL__" : 1 00:13:41.680 Fetching value of define "__RDRND__" : 1 00:13:41.680 Fetching value of define "__RDSEED__" : 1 00:13:41.680 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:13:41.680 Fetching value of define "__znver1__" : (undefined) 00:13:41.680 Fetching value of define "__znver2__" : (undefined) 00:13:41.680 Fetching value of define "__znver3__" : (undefined) 00:13:41.680 Fetching value of define "__znver4__" : (undefined) 00:13:41.680 Compiler for C supports arguments -Wno-format-truncation: YES 00:13:41.680 Message: lib/log: Defining dependency "log" 00:13:41.680 Message: lib/kvargs: Defining dependency "kvargs" 00:13:41.680 Message: lib/telemetry: Defining dependency "telemetry" 00:13:41.680 Checking for function "getentropy" : NO 00:13:41.680 Message: lib/eal: Defining dependency "eal" 00:13:41.680 Message: lib/ring: Defining dependency "ring" 00:13:41.680 Message: lib/rcu: Defining dependency "rcu" 00:13:41.680 Message: lib/mempool: Defining dependency "mempool" 00:13:41.680 Message: lib/mbuf: Defining dependency "mbuf" 00:13:41.680 Fetching value of define "__PCLMUL__" : 1 (cached) 00:13:41.680 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:13:41.680 Compiler for C supports arguments -mpclmul: YES 00:13:41.680 Compiler for C supports arguments -maes: YES 00:13:41.680 Compiler for C supports arguments -mavx512f: YES (cached) 00:13:41.680 Compiler for C supports arguments -mavx512bw: YES 00:13:41.680 Compiler for C supports arguments -mavx512dq: YES 00:13:41.680 Compiler for C supports arguments -mavx512vl: YES 00:13:41.680 Compiler for C supports arguments -mvpclmulqdq: YES 00:13:41.680 Compiler for C supports arguments -mavx2: YES 00:13:41.680 Compiler for C supports arguments -mavx: YES 00:13:41.680 Message: lib/net: Defining dependency "net" 00:13:41.680 Message: lib/meter: Defining dependency "meter" 00:13:41.680 Message: lib/ethdev: Defining dependency "ethdev" 00:13:41.680 Message: lib/pci: Defining dependency "pci" 00:13:41.680 Message: lib/cmdline: Defining dependency "cmdline" 00:13:41.680 Message: lib/hash: Defining dependency "hash" 00:13:41.680 Message: lib/timer: Defining dependency "timer" 00:13:41.680 Message: lib/compressdev: Defining dependency "compressdev" 00:13:41.680 Message: lib/cryptodev: Defining dependency "cryptodev" 00:13:41.680 Message: lib/dmadev: Defining dependency "dmadev" 00:13:41.680 Compiler for C supports arguments -Wno-cast-qual: YES 00:13:41.680 Message: lib/power: Defining dependency "power" 00:13:41.680 Message: lib/reorder: Defining dependency "reorder" 00:13:41.680 Message: lib/security: Defining dependency "security" 00:13:41.680 Has header "linux/userfaultfd.h" : YES 00:13:41.680 Has header "linux/vduse.h" : YES 00:13:41.680 Message: lib/vhost: Defining dependency "vhost" 00:13:41.680 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:13:41.680 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:13:41.680 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:13:41.680 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:13:41.680 Message: 
Disabling raw/* drivers: missing internal dependency "rawdev" 00:13:41.680 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:13:41.680 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:13:41.680 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:13:41.680 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:13:41.680 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:13:41.680 Program doxygen found: YES (/usr/bin/doxygen) 00:13:41.680 Configuring doxy-api-html.conf using configuration 00:13:41.680 Configuring doxy-api-man.conf using configuration 00:13:41.680 Program mandb found: YES (/usr/bin/mandb) 00:13:41.680 Program sphinx-build found: NO 00:13:41.680 Configuring rte_build_config.h using configuration 00:13:41.680 Message: 00:13:41.680 ================= 00:13:41.680 Applications Enabled 00:13:41.680 ================= 00:13:41.680 00:13:41.680 apps: 00:13:41.680 00:13:41.680 00:13:41.680 Message: 00:13:41.680 ================= 00:13:41.680 Libraries Enabled 00:13:41.680 ================= 00:13:41.680 00:13:41.680 libs: 00:13:41.680 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:13:41.680 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:13:41.680 cryptodev, dmadev, power, reorder, security, vhost, 00:13:41.680 00:13:41.680 Message: 00:13:41.680 =============== 00:13:41.680 Drivers Enabled 00:13:41.680 =============== 00:13:41.680 00:13:41.680 common: 00:13:41.680 00:13:41.680 bus: 00:13:41.680 pci, vdev, 00:13:41.680 mempool: 00:13:41.680 ring, 00:13:41.680 dma: 00:13:41.680 00:13:41.680 net: 00:13:41.680 00:13:41.680 crypto: 00:13:41.680 00:13:41.680 compress: 00:13:41.680 00:13:41.680 vdpa: 00:13:41.680 00:13:41.680 00:13:41.680 Message: 00:13:41.680 ================= 00:13:41.680 Content Skipped 00:13:41.680 ================= 00:13:41.680 00:13:41.680 apps: 00:13:41.680 dumpcap: explicitly disabled via build config 00:13:41.680 graph: explicitly disabled via build config 00:13:41.680 pdump: explicitly disabled via build config 00:13:41.680 proc-info: explicitly disabled via build config 00:13:41.680 test-acl: explicitly disabled via build config 00:13:41.680 test-bbdev: explicitly disabled via build config 00:13:41.680 test-cmdline: explicitly disabled via build config 00:13:41.680 test-compress-perf: explicitly disabled via build config 00:13:41.680 test-crypto-perf: explicitly disabled via build config 00:13:41.680 test-dma-perf: explicitly disabled via build config 00:13:41.680 test-eventdev: explicitly disabled via build config 00:13:41.680 test-fib: explicitly disabled via build config 00:13:41.680 test-flow-perf: explicitly disabled via build config 00:13:41.680 test-gpudev: explicitly disabled via build config 00:13:41.680 test-mldev: explicitly disabled via build config 00:13:41.680 test-pipeline: explicitly disabled via build config 00:13:41.680 test-pmd: explicitly disabled via build config 00:13:41.680 test-regex: explicitly disabled via build config 00:13:41.680 test-sad: explicitly disabled via build config 00:13:41.680 test-security-perf: explicitly disabled via build config 00:13:41.680 00:13:41.680 libs: 00:13:41.680 metrics: explicitly disabled via build config 00:13:41.680 acl: explicitly disabled via build config 00:13:41.680 bbdev: explicitly disabled via build config 00:13:41.680 bitratestats: explicitly disabled via build config 00:13:41.680 bpf: explicitly disabled via build config 00:13:41.680 cfgfile: explicitly 
disabled via build config 00:13:41.680 distributor: explicitly disabled via build config 00:13:41.680 efd: explicitly disabled via build config 00:13:41.680 eventdev: explicitly disabled via build config 00:13:41.681 dispatcher: explicitly disabled via build config 00:13:41.681 gpudev: explicitly disabled via build config 00:13:41.681 gro: explicitly disabled via build config 00:13:41.681 gso: explicitly disabled via build config 00:13:41.681 ip_frag: explicitly disabled via build config 00:13:41.681 jobstats: explicitly disabled via build config 00:13:41.681 latencystats: explicitly disabled via build config 00:13:41.681 lpm: explicitly disabled via build config 00:13:41.681 member: explicitly disabled via build config 00:13:41.681 pcapng: explicitly disabled via build config 00:13:41.681 rawdev: explicitly disabled via build config 00:13:41.681 regexdev: explicitly disabled via build config 00:13:41.681 mldev: explicitly disabled via build config 00:13:41.681 rib: explicitly disabled via build config 00:13:41.681 sched: explicitly disabled via build config 00:13:41.681 stack: explicitly disabled via build config 00:13:41.681 ipsec: explicitly disabled via build config 00:13:41.681 pdcp: explicitly disabled via build config 00:13:41.681 fib: explicitly disabled via build config 00:13:41.681 port: explicitly disabled via build config 00:13:41.681 pdump: explicitly disabled via build config 00:13:41.681 table: explicitly disabled via build config 00:13:41.681 pipeline: explicitly disabled via build config 00:13:41.681 graph: explicitly disabled via build config 00:13:41.681 node: explicitly disabled via build config 00:13:41.681 00:13:41.681 drivers: 00:13:41.681 common/cpt: not in enabled drivers build config 00:13:41.681 common/dpaax: not in enabled drivers build config 00:13:41.681 common/iavf: not in enabled drivers build config 00:13:41.681 common/idpf: not in enabled drivers build config 00:13:41.681 common/mvep: not in enabled drivers build config 00:13:41.681 common/octeontx: not in enabled drivers build config 00:13:41.681 bus/auxiliary: not in enabled drivers build config 00:13:41.681 bus/cdx: not in enabled drivers build config 00:13:41.681 bus/dpaa: not in enabled drivers build config 00:13:41.681 bus/fslmc: not in enabled drivers build config 00:13:41.681 bus/ifpga: not in enabled drivers build config 00:13:41.681 bus/platform: not in enabled drivers build config 00:13:41.681 bus/vmbus: not in enabled drivers build config 00:13:41.681 common/cnxk: not in enabled drivers build config 00:13:41.681 common/mlx5: not in enabled drivers build config 00:13:41.681 common/nfp: not in enabled drivers build config 00:13:41.681 common/qat: not in enabled drivers build config 00:13:41.681 common/sfc_efx: not in enabled drivers build config 00:13:41.681 mempool/bucket: not in enabled drivers build config 00:13:41.681 mempool/cnxk: not in enabled drivers build config 00:13:41.681 mempool/dpaa: not in enabled drivers build config 00:13:41.681 mempool/dpaa2: not in enabled drivers build config 00:13:41.681 mempool/octeontx: not in enabled drivers build config 00:13:41.681 mempool/stack: not in enabled drivers build config 00:13:41.681 dma/cnxk: not in enabled drivers build config 00:13:41.681 dma/dpaa: not in enabled drivers build config 00:13:41.681 dma/dpaa2: not in enabled drivers build config 00:13:41.681 dma/hisilicon: not in enabled drivers build config 00:13:41.681 dma/idxd: not in enabled drivers build config 00:13:41.681 dma/ioat: not in enabled drivers build config 00:13:41.681 
dma/skeleton: not in enabled drivers build config 00:13:41.681 net/af_packet: not in enabled drivers build config 00:13:41.681 net/af_xdp: not in enabled drivers build config 00:13:41.681 net/ark: not in enabled drivers build config 00:13:41.681 net/atlantic: not in enabled drivers build config 00:13:41.681 net/avp: not in enabled drivers build config 00:13:41.681 net/axgbe: not in enabled drivers build config 00:13:41.681 net/bnx2x: not in enabled drivers build config 00:13:41.681 net/bnxt: not in enabled drivers build config 00:13:41.681 net/bonding: not in enabled drivers build config 00:13:41.681 net/cnxk: not in enabled drivers build config 00:13:41.681 net/cpfl: not in enabled drivers build config 00:13:41.681 net/cxgbe: not in enabled drivers build config 00:13:41.681 net/dpaa: not in enabled drivers build config 00:13:41.681 net/dpaa2: not in enabled drivers build config 00:13:41.681 net/e1000: not in enabled drivers build config 00:13:41.681 net/ena: not in enabled drivers build config 00:13:41.681 net/enetc: not in enabled drivers build config 00:13:41.681 net/enetfec: not in enabled drivers build config 00:13:41.681 net/enic: not in enabled drivers build config 00:13:41.681 net/failsafe: not in enabled drivers build config 00:13:41.681 net/fm10k: not in enabled drivers build config 00:13:41.681 net/gve: not in enabled drivers build config 00:13:41.681 net/hinic: not in enabled drivers build config 00:13:41.681 net/hns3: not in enabled drivers build config 00:13:41.681 net/i40e: not in enabled drivers build config 00:13:41.681 net/iavf: not in enabled drivers build config 00:13:41.681 net/ice: not in enabled drivers build config 00:13:41.681 net/idpf: not in enabled drivers build config 00:13:41.681 net/igc: not in enabled drivers build config 00:13:41.681 net/ionic: not in enabled drivers build config 00:13:41.681 net/ipn3ke: not in enabled drivers build config 00:13:41.681 net/ixgbe: not in enabled drivers build config 00:13:41.681 net/mana: not in enabled drivers build config 00:13:41.681 net/memif: not in enabled drivers build config 00:13:41.681 net/mlx4: not in enabled drivers build config 00:13:41.681 net/mlx5: not in enabled drivers build config 00:13:41.681 net/mvneta: not in enabled drivers build config 00:13:41.681 net/mvpp2: not in enabled drivers build config 00:13:41.681 net/netvsc: not in enabled drivers build config 00:13:41.681 net/nfb: not in enabled drivers build config 00:13:41.681 net/nfp: not in enabled drivers build config 00:13:41.681 net/ngbe: not in enabled drivers build config 00:13:41.681 net/null: not in enabled drivers build config 00:13:41.681 net/octeontx: not in enabled drivers build config 00:13:41.681 net/octeon_ep: not in enabled drivers build config 00:13:41.681 net/pcap: not in enabled drivers build config 00:13:41.681 net/pfe: not in enabled drivers build config 00:13:41.681 net/qede: not in enabled drivers build config 00:13:41.681 net/ring: not in enabled drivers build config 00:13:41.681 net/sfc: not in enabled drivers build config 00:13:41.681 net/softnic: not in enabled drivers build config 00:13:41.681 net/tap: not in enabled drivers build config 00:13:41.681 net/thunderx: not in enabled drivers build config 00:13:41.681 net/txgbe: not in enabled drivers build config 00:13:41.681 net/vdev_netvsc: not in enabled drivers build config 00:13:41.681 net/vhost: not in enabled drivers build config 00:13:41.681 net/virtio: not in enabled drivers build config 00:13:41.681 net/vmxnet3: not in enabled drivers build config 00:13:41.681 raw/*: 
missing internal dependency, "rawdev" 00:13:41.681 crypto/armv8: not in enabled drivers build config 00:13:41.681 crypto/bcmfs: not in enabled drivers build config 00:13:41.681 crypto/caam_jr: not in enabled drivers build config 00:13:41.681 crypto/ccp: not in enabled drivers build config 00:13:41.681 crypto/cnxk: not in enabled drivers build config 00:13:41.681 crypto/dpaa_sec: not in enabled drivers build config 00:13:41.681 crypto/dpaa2_sec: not in enabled drivers build config 00:13:41.681 crypto/ipsec_mb: not in enabled drivers build config 00:13:41.681 crypto/mlx5: not in enabled drivers build config 00:13:41.681 crypto/mvsam: not in enabled drivers build config 00:13:41.681 crypto/nitrox: not in enabled drivers build config 00:13:41.681 crypto/null: not in enabled drivers build config 00:13:41.681 crypto/octeontx: not in enabled drivers build config 00:13:41.681 crypto/openssl: not in enabled drivers build config 00:13:41.681 crypto/scheduler: not in enabled drivers build config 00:13:41.681 crypto/uadk: not in enabled drivers build config 00:13:41.681 crypto/virtio: not in enabled drivers build config 00:13:41.681 compress/isal: not in enabled drivers build config 00:13:41.681 compress/mlx5: not in enabled drivers build config 00:13:41.681 compress/octeontx: not in enabled drivers build config 00:13:41.681 compress/zlib: not in enabled drivers build config 00:13:41.681 regex/*: missing internal dependency, "regexdev" 00:13:41.681 ml/*: missing internal dependency, "mldev" 00:13:41.681 vdpa/ifc: not in enabled drivers build config 00:13:41.681 vdpa/mlx5: not in enabled drivers build config 00:13:41.681 vdpa/nfp: not in enabled drivers build config 00:13:41.681 vdpa/sfc: not in enabled drivers build config 00:13:41.681 event/*: missing internal dependency, "eventdev" 00:13:41.681 baseband/*: missing internal dependency, "bbdev" 00:13:41.681 gpu/*: missing internal dependency, "gpudev" 00:13:41.681 00:13:41.681 00:13:41.681 Build targets in project: 85 00:13:41.681 00:13:41.681 DPDK 23.11.0 00:13:41.681 00:13:41.681 User defined options 00:13:41.681 buildtype : debug 00:13:41.681 default_library : shared 00:13:41.681 libdir : lib 00:13:41.681 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:13:41.681 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:13:41.681 c_link_args : 00:13:41.681 cpu_instruction_set: native 00:13:41.681 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:13:41.682 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:13:41.682 enable_docs : false 00:13:41.682 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:13:41.682 enable_kmods : false 00:13:41.682 tests : false 00:13:41.682 00:13:41.682 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:13:41.940 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:13:42.198 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:13:42.198 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:13:42.198 [3/265] Linking static target lib/librte_kvargs.a 00:13:42.198 [4/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:13:42.198 [5/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:13:42.198 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:13:42.198 [7/265] Linking static target lib/librte_log.a 00:13:42.198 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:13:42.198 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:13:42.198 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:13:42.457 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:13:43.024 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:13:43.024 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:13:43.024 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:13:43.024 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:13:43.282 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:13:43.282 [17/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:13:43.282 [18/265] Linking static target lib/librte_telemetry.a 00:13:43.282 [19/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:13:43.282 [20/265] Linking target lib/librte_log.so.24.0 00:13:43.282 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:13:43.282 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:13:43.282 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:13:43.539 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:13:43.539 [25/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:13:43.539 [26/265] Linking target lib/librte_kvargs.so.24.0 00:13:43.797 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:13:43.797 [28/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:13:44.055 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:13:44.055 [30/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:13:44.055 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:13:44.055 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:13:44.055 [33/265] Linking target lib/librte_telemetry.so.24.0 00:13:44.055 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:13:44.313 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:13:44.313 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:13:44.313 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:13:44.313 [38/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:13:44.313 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:13:44.571 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:13:44.571 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:13:44.571 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:13:44.571 [43/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:13:44.829 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:13:44.829 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:13:45.088 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:13:45.088 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:13:45.346 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:13:45.346 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:13:45.346 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:13:45.346 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:13:45.604 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:13:45.604 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:13:45.604 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:13:45.604 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:13:45.862 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:13:45.862 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:13:45.862 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:13:45.862 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:13:46.121 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:13:46.121 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:13:46.121 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:13:46.379 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:13:46.379 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:13:46.379 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:13:46.638 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:13:46.638 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:13:46.638 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:13:46.896 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:13:46.896 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:13:46.896 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:13:46.896 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:13:46.896 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:13:46.896 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:13:46.896 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:13:47.155 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:13:47.155 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:13:47.414 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:13:47.414 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:13:47.414 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:13:47.673 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:13:47.673 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:13:47.673 [83/265] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:13:47.673 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:13:47.673 [85/265] Linking static target lib/librte_ring.a 00:13:47.932 [86/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:13:47.932 [87/265] Linking static target lib/librte_eal.a 00:13:47.932 [88/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:13:47.932 [89/265] Linking static target lib/librte_rcu.a 00:13:48.499 [90/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:13:48.499 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:13:48.499 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:13:48.499 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:13:48.499 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:13:48.499 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:13:48.499 [96/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:13:48.499 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:13:48.499 [98/265] Linking static target lib/librte_mempool.a 00:13:49.067 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:13:49.067 [100/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:13:49.067 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:13:49.327 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:13:49.327 [103/265] Linking static target lib/librte_mbuf.a 00:13:49.327 [104/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:13:49.586 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:13:49.586 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:13:49.586 [107/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:13:49.586 [108/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:13:49.586 [109/265] Linking static target lib/librte_net.a 00:13:49.844 [110/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:13:50.113 [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:13:50.113 [112/265] Linking static target lib/librte_meter.a 00:13:50.113 [113/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:13:50.372 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:13:50.372 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:13:50.372 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:13:50.372 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:13:50.630 [118/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:13:50.630 [119/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:13:51.565 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:13:51.565 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:13:51.565 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:13:51.565 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:13:51.565 [124/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:13:51.565 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:13:51.565 [126/265] Linking static target lib/librte_pci.a 00:13:51.565 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:13:51.565 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:13:51.565 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:13:51.822 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:13:51.822 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:13:51.822 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:13:51.822 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:13:51.822 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:13:51.822 [135/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:52.081 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:13:52.081 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:13:52.081 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:13:52.081 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:13:52.081 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:13:52.081 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:13:52.339 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:13:52.339 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:13:52.339 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:13:52.339 [145/265] Linking static target lib/librte_cmdline.a 00:13:52.597 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:13:52.597 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:13:52.597 [148/265] Linking static target lib/librte_ethdev.a 00:13:52.597 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:13:52.855 [150/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:13:52.855 [151/265] Linking static target lib/librte_timer.a 00:13:52.855 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:13:52.855 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:13:53.113 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:13:53.113 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:13:53.113 [156/265] Linking static target lib/librte_hash.a 00:13:53.113 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:13:53.113 [158/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:13:53.113 [159/265] Linking static target lib/librte_compressdev.a 00:13:53.371 [160/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:13:53.371 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:13:53.629 [162/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:13:53.629 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:13:53.629 [164/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 
00:13:53.629 [165/265] Linking static target lib/librte_dmadev.a 00:13:53.933 [166/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:13:53.933 [167/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:13:54.192 [168/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:13:54.192 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:13:54.192 [170/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:54.192 [171/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:13:54.192 [172/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:13:54.192 [173/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:13:54.192 [174/265] Linking static target lib/librte_cryptodev.a 00:13:54.450 [175/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:54.450 [176/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:13:54.709 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:13:54.709 [178/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:13:54.968 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:13:54.968 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:13:54.968 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:13:54.968 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:13:54.968 [183/265] Linking static target lib/librte_power.a 00:13:55.226 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:13:55.226 [185/265] Linking static target lib/librte_reorder.a 00:13:55.484 [186/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:13:55.484 [187/265] Linking static target lib/librte_security.a 00:13:55.484 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:13:55.743 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:13:55.743 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:13:55.743 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:13:55.743 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:13:56.311 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:13:56.311 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:13:56.311 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:13:56.311 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:13:56.570 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:13:56.570 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:13:56.829 [199/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.087 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:13:57.088 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:13:57.088 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:13:57.346 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 
00:13:57.346 [204/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:13:57.346 [205/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:13:57.346 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:13:57.346 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:13:57.346 [208/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:13:57.346 [209/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:13:57.609 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:13:57.609 [211/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:13:57.609 [212/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:57.609 [213/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:57.609 [214/265] Linking static target drivers/librte_bus_vdev.a 00:13:57.609 [215/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:57.609 [216/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:57.609 [217/265] Linking static target drivers/librte_bus_pci.a 00:13:57.609 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:13:57.609 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:13:57.868 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:57.868 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:13:57.868 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:57.868 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:57.868 [224/265] Linking static target drivers/librte_mempool_ring.a 00:13:58.126 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:58.694 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:13:58.694 [227/265] Linking static target lib/librte_vhost.a 00:13:59.262 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:13:59.262 [229/265] Linking target lib/librte_eal.so.24.0 00:13:59.521 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:13:59.521 [231/265] Linking target lib/librte_ring.so.24.0 00:13:59.521 [232/265] Linking target lib/librte_meter.so.24.0 00:13:59.521 [233/265] Linking target lib/librte_pci.so.24.0 00:13:59.521 [234/265] Linking target lib/librte_dmadev.so.24.0 00:13:59.521 [235/265] Linking target lib/librte_timer.so.24.0 00:13:59.521 [236/265] Linking target drivers/librte_bus_vdev.so.24.0 00:13:59.779 [237/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:13:59.779 [238/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:13:59.779 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:13:59.779 [240/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:13:59.779 [241/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:13:59.779 [242/265] Linking target lib/librte_rcu.so.24.0 00:13:59.779 [243/265] Linking target drivers/librte_bus_pci.so.24.0 
00:13:59.779 [244/265] Linking target lib/librte_mempool.so.24.0 00:13:59.779 [245/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:14:00.038 [246/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:14:00.038 [247/265] Linking target drivers/librte_mempool_ring.so.24.0 00:14:00.038 [248/265] Linking target lib/librte_mbuf.so.24.0 00:14:00.038 [249/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:00.038 [250/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:14:00.296 [251/265] Linking target lib/librte_reorder.so.24.0 00:14:00.296 [252/265] Linking target lib/librte_compressdev.so.24.0 00:14:00.296 [253/265] Linking target lib/librte_net.so.24.0 00:14:00.296 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:14:00.296 [255/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:14:00.296 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:14:00.296 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:14:00.296 [258/265] Linking target lib/librte_hash.so.24.0 00:14:00.296 [259/265] Linking target lib/librte_security.so.24.0 00:14:00.296 [260/265] Linking target lib/librte_cmdline.so.24.0 00:14:00.555 [261/265] Linking target lib/librte_ethdev.so.24.0 00:14:00.555 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:14:00.555 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:14:00.555 [264/265] Linking target lib/librte_power.so.24.0 00:14:00.555 [265/265] Linking target lib/librte_vhost.so.24.0 00:14:00.555 INFO: autodetecting backend as ninja 00:14:00.555 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:14:01.930 CC lib/ut/ut.o 00:14:01.930 CC lib/ut_mock/mock.o 00:14:01.930 CC lib/log/log.o 00:14:01.930 CC lib/log/log_flags.o 00:14:01.930 CC lib/log/log_deprecated.o 00:14:01.930 LIB libspdk_ut_mock.a 00:14:01.930 SO libspdk_ut_mock.so.6.0 00:14:01.930 LIB libspdk_log.a 00:14:01.930 LIB libspdk_ut.a 00:14:01.931 SYMLINK libspdk_ut_mock.so 00:14:01.931 SO libspdk_log.so.7.0 00:14:01.931 SO libspdk_ut.so.2.0 00:14:02.188 SYMLINK libspdk_ut.so 00:14:02.188 SYMLINK libspdk_log.so 00:14:02.188 CC lib/dma/dma.o 00:14:02.188 CC lib/ioat/ioat.o 00:14:02.188 CXX lib/trace_parser/trace.o 00:14:02.188 CC lib/util/base64.o 00:14:02.188 CC lib/util/bit_array.o 00:14:02.188 CC lib/util/cpuset.o 00:14:02.188 CC lib/util/crc32.o 00:14:02.188 CC lib/util/crc16.o 00:14:02.188 CC lib/util/crc32c.o 00:14:02.446 CC lib/vfio_user/host/vfio_user_pci.o 00:14:02.446 CC lib/util/crc32_ieee.o 00:14:02.446 CC lib/util/crc64.o 00:14:02.446 CC lib/util/dif.o 00:14:02.446 CC lib/util/fd.o 00:14:02.446 LIB libspdk_dma.a 00:14:02.446 CC lib/util/file.o 00:14:02.446 CC lib/util/hexlify.o 00:14:02.447 SO libspdk_dma.so.4.0 00:14:02.718 LIB libspdk_ioat.a 00:14:02.718 CC lib/vfio_user/host/vfio_user.o 00:14:02.718 CC lib/util/iov.o 00:14:02.718 SYMLINK libspdk_dma.so 00:14:02.718 CC lib/util/math.o 00:14:02.718 SO libspdk_ioat.so.7.0 00:14:02.718 CC lib/util/pipe.o 00:14:02.718 CC lib/util/strerror_tls.o 00:14:02.718 CC lib/util/string.o 00:14:02.718 SYMLINK libspdk_ioat.so 00:14:02.718 CC lib/util/uuid.o 00:14:02.718 CC lib/util/fd_group.o 00:14:02.718 CC lib/util/xor.o 00:14:02.718 CC 
lib/util/zipf.o 00:14:02.718 LIB libspdk_vfio_user.a 00:14:03.006 SO libspdk_vfio_user.so.5.0 00:14:03.006 SYMLINK libspdk_vfio_user.so 00:14:03.006 LIB libspdk_util.a 00:14:03.264 SO libspdk_util.so.9.0 00:14:03.264 SYMLINK libspdk_util.so 00:14:03.264 LIB libspdk_trace_parser.a 00:14:03.264 SO libspdk_trace_parser.so.5.0 00:14:03.521 SYMLINK libspdk_trace_parser.so 00:14:03.521 CC lib/env_dpdk/env.o 00:14:03.521 CC lib/env_dpdk/memory.o 00:14:03.521 CC lib/env_dpdk/pci.o 00:14:03.521 CC lib/env_dpdk/init.o 00:14:03.521 CC lib/env_dpdk/threads.o 00:14:03.521 CC lib/idxd/idxd.o 00:14:03.521 CC lib/vmd/vmd.o 00:14:03.521 CC lib/rdma/common.o 00:14:03.521 CC lib/json/json_parse.o 00:14:03.521 CC lib/conf/conf.o 00:14:03.521 CC lib/json/json_util.o 00:14:03.779 CC lib/idxd/idxd_user.o 00:14:03.779 LIB libspdk_conf.a 00:14:03.779 SO libspdk_conf.so.6.0 00:14:03.779 CC lib/rdma/rdma_verbs.o 00:14:03.779 SYMLINK libspdk_conf.so 00:14:03.779 CC lib/vmd/led.o 00:14:03.779 CC lib/env_dpdk/pci_ioat.o 00:14:03.779 CC lib/env_dpdk/pci_virtio.o 00:14:03.779 CC lib/json/json_write.o 00:14:04.037 CC lib/env_dpdk/pci_vmd.o 00:14:04.037 CC lib/env_dpdk/pci_idxd.o 00:14:04.037 LIB libspdk_rdma.a 00:14:04.037 CC lib/env_dpdk/pci_event.o 00:14:04.037 CC lib/env_dpdk/sigbus_handler.o 00:14:04.037 LIB libspdk_idxd.a 00:14:04.037 SO libspdk_rdma.so.6.0 00:14:04.037 SO libspdk_idxd.so.12.0 00:14:04.037 CC lib/env_dpdk/pci_dpdk.o 00:14:04.037 SYMLINK libspdk_rdma.so 00:14:04.037 CC lib/env_dpdk/pci_dpdk_2207.o 00:14:04.037 CC lib/env_dpdk/pci_dpdk_2211.o 00:14:04.037 SYMLINK libspdk_idxd.so 00:14:04.037 LIB libspdk_json.a 00:14:04.037 LIB libspdk_vmd.a 00:14:04.294 SO libspdk_json.so.6.0 00:14:04.294 SO libspdk_vmd.so.6.0 00:14:04.294 SYMLINK libspdk_vmd.so 00:14:04.294 SYMLINK libspdk_json.so 00:14:04.613 CC lib/jsonrpc/jsonrpc_server.o 00:14:04.613 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:14:04.613 CC lib/jsonrpc/jsonrpc_client.o 00:14:04.613 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:14:04.870 LIB libspdk_jsonrpc.a 00:14:04.870 SO libspdk_jsonrpc.so.6.0 00:14:04.870 LIB libspdk_env_dpdk.a 00:14:04.870 SYMLINK libspdk_jsonrpc.so 00:14:04.870 SO libspdk_env_dpdk.so.14.0 00:14:05.127 SYMLINK libspdk_env_dpdk.so 00:14:05.127 CC lib/rpc/rpc.o 00:14:05.384 LIB libspdk_rpc.a 00:14:05.384 SO libspdk_rpc.so.6.0 00:14:05.384 SYMLINK libspdk_rpc.so 00:14:05.641 CC lib/notify/notify_rpc.o 00:14:05.641 CC lib/notify/notify.o 00:14:05.641 CC lib/keyring/keyring.o 00:14:05.641 CC lib/keyring/keyring_rpc.o 00:14:05.641 CC lib/trace/trace.o 00:14:05.641 CC lib/trace/trace_flags.o 00:14:05.641 CC lib/trace/trace_rpc.o 00:14:05.898 LIB libspdk_notify.a 00:14:05.898 SO libspdk_notify.so.6.0 00:14:05.898 LIB libspdk_trace.a 00:14:05.898 LIB libspdk_keyring.a 00:14:05.898 SYMLINK libspdk_notify.so 00:14:05.898 SO libspdk_trace.so.10.0 00:14:05.898 SO libspdk_keyring.so.1.0 00:14:06.200 SYMLINK libspdk_keyring.so 00:14:06.200 SYMLINK libspdk_trace.so 00:14:06.458 CC lib/thread/thread.o 00:14:06.458 CC lib/thread/iobuf.o 00:14:06.458 CC lib/sock/sock.o 00:14:06.458 CC lib/sock/sock_rpc.o 00:14:06.716 LIB libspdk_sock.a 00:14:06.716 SO libspdk_sock.so.9.0 00:14:06.975 SYMLINK libspdk_sock.so 00:14:07.234 CC lib/nvme/nvme_ctrlr_cmd.o 00:14:07.234 CC lib/nvme/nvme_ctrlr.o 00:14:07.234 CC lib/nvme/nvme_ns_cmd.o 00:14:07.234 CC lib/nvme/nvme_fabric.o 00:14:07.234 CC lib/nvme/nvme_pcie_common.o 00:14:07.234 CC lib/nvme/nvme_ns.o 00:14:07.234 CC lib/nvme/nvme_pcie.o 00:14:07.234 CC lib/nvme/nvme.o 00:14:07.234 CC lib/nvme/nvme_qpair.o 00:14:08.177 
CC lib/nvme/nvme_quirks.o 00:14:08.177 LIB libspdk_thread.a 00:14:08.177 CC lib/nvme/nvme_transport.o 00:14:08.177 SO libspdk_thread.so.10.0 00:14:08.177 CC lib/nvme/nvme_discovery.o 00:14:08.177 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:14:08.177 SYMLINK libspdk_thread.so 00:14:08.177 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:14:08.177 CC lib/nvme/nvme_tcp.o 00:14:08.177 CC lib/nvme/nvme_opal.o 00:14:08.177 CC lib/nvme/nvme_io_msg.o 00:14:08.435 CC lib/nvme/nvme_poll_group.o 00:14:08.694 CC lib/nvme/nvme_zns.o 00:14:08.694 CC lib/accel/accel.o 00:14:08.694 CC lib/accel/accel_rpc.o 00:14:08.952 CC lib/blob/blobstore.o 00:14:08.952 CC lib/init/json_config.o 00:14:08.952 CC lib/init/subsystem.o 00:14:08.952 CC lib/init/subsystem_rpc.o 00:14:08.952 CC lib/init/rpc.o 00:14:08.952 CC lib/accel/accel_sw.o 00:14:09.210 CC lib/nvme/nvme_stubs.o 00:14:09.210 CC lib/blob/request.o 00:14:09.210 LIB libspdk_init.a 00:14:09.210 SO libspdk_init.so.5.0 00:14:09.210 CC lib/virtio/virtio.o 00:14:09.210 SYMLINK libspdk_init.so 00:14:09.210 CC lib/blob/zeroes.o 00:14:09.210 CC lib/blob/blob_bs_dev.o 00:14:09.468 CC lib/virtio/virtio_vhost_user.o 00:14:09.468 CC lib/virtio/virtio_vfio_user.o 00:14:09.468 CC lib/virtio/virtio_pci.o 00:14:09.468 CC lib/nvme/nvme_auth.o 00:14:09.468 CC lib/nvme/nvme_cuse.o 00:14:09.726 CC lib/nvme/nvme_rdma.o 00:14:09.726 CC lib/event/app.o 00:14:09.726 LIB libspdk_virtio.a 00:14:09.726 CC lib/event/log_rpc.o 00:14:09.726 CC lib/event/reactor.o 00:14:09.726 LIB libspdk_accel.a 00:14:09.726 CC lib/event/app_rpc.o 00:14:09.726 SO libspdk_virtio.so.7.0 00:14:09.726 SO libspdk_accel.so.15.0 00:14:09.984 SYMLINK libspdk_virtio.so 00:14:09.984 CC lib/event/scheduler_static.o 00:14:09.984 SYMLINK libspdk_accel.so 00:14:09.984 CC lib/bdev/bdev.o 00:14:09.984 CC lib/bdev/bdev_rpc.o 00:14:09.984 CC lib/bdev/part.o 00:14:09.984 CC lib/bdev/bdev_zone.o 00:14:10.241 CC lib/bdev/scsi_nvme.o 00:14:10.241 LIB libspdk_event.a 00:14:10.241 SO libspdk_event.so.13.0 00:14:10.241 SYMLINK libspdk_event.so 00:14:11.174 LIB libspdk_nvme.a 00:14:11.174 SO libspdk_nvme.so.13.0 00:14:11.432 SYMLINK libspdk_nvme.so 00:14:11.691 LIB libspdk_blob.a 00:14:11.691 SO libspdk_blob.so.11.0 00:14:11.949 SYMLINK libspdk_blob.so 00:14:12.206 CC lib/lvol/lvol.o 00:14:12.206 CC lib/blobfs/blobfs.o 00:14:12.206 CC lib/blobfs/tree.o 00:14:12.786 LIB libspdk_bdev.a 00:14:12.786 SO libspdk_bdev.so.15.0 00:14:13.137 LIB libspdk_blobfs.a 00:14:13.137 SYMLINK libspdk_bdev.so 00:14:13.137 SO libspdk_blobfs.so.10.0 00:14:13.137 LIB libspdk_lvol.a 00:14:13.137 SO libspdk_lvol.so.10.0 00:14:13.137 SYMLINK libspdk_blobfs.so 00:14:13.137 SYMLINK libspdk_lvol.so 00:14:13.137 CC lib/ftl/ftl_core.o 00:14:13.137 CC lib/scsi/dev.o 00:14:13.137 CC lib/scsi/lun.o 00:14:13.137 CC lib/ftl/ftl_layout.o 00:14:13.137 CC lib/scsi/port.o 00:14:13.137 CC lib/scsi/scsi.o 00:14:13.137 CC lib/ftl/ftl_init.o 00:14:13.137 CC lib/nbd/nbd.o 00:14:13.137 CC lib/ublk/ublk.o 00:14:13.137 CC lib/nvmf/ctrlr.o 00:14:13.396 CC lib/ublk/ublk_rpc.o 00:14:13.396 CC lib/nvmf/ctrlr_discovery.o 00:14:13.396 CC lib/ftl/ftl_debug.o 00:14:13.396 CC lib/ftl/ftl_io.o 00:14:13.396 CC lib/scsi/scsi_bdev.o 00:14:13.396 CC lib/scsi/scsi_pr.o 00:14:13.396 CC lib/scsi/scsi_rpc.o 00:14:13.396 CC lib/nvmf/ctrlr_bdev.o 00:14:13.653 CC lib/nbd/nbd_rpc.o 00:14:13.653 CC lib/ftl/ftl_sb.o 00:14:13.653 CC lib/scsi/task.o 00:14:13.653 CC lib/nvmf/subsystem.o 00:14:13.653 LIB libspdk_nbd.a 00:14:13.653 SO libspdk_nbd.so.7.0 00:14:13.952 LIB libspdk_ublk.a 00:14:13.952 CC lib/ftl/ftl_l2p.o 
00:14:13.952 CC lib/ftl/ftl_l2p_flat.o 00:14:13.952 SO libspdk_ublk.so.3.0 00:14:13.952 CC lib/ftl/ftl_nv_cache.o 00:14:13.952 SYMLINK libspdk_nbd.so 00:14:13.952 CC lib/nvmf/nvmf.o 00:14:13.952 SYMLINK libspdk_ublk.so 00:14:13.952 CC lib/nvmf/nvmf_rpc.o 00:14:13.952 CC lib/nvmf/transport.o 00:14:13.952 LIB libspdk_scsi.a 00:14:13.952 SO libspdk_scsi.so.9.0 00:14:13.952 CC lib/nvmf/tcp.o 00:14:13.952 CC lib/nvmf/rdma.o 00:14:14.232 SYMLINK libspdk_scsi.so 00:14:14.232 CC lib/iscsi/conn.o 00:14:14.490 CC lib/vhost/vhost.o 00:14:14.490 CC lib/vhost/vhost_rpc.o 00:14:14.749 CC lib/vhost/vhost_scsi.o 00:14:14.749 CC lib/ftl/ftl_band.o 00:14:14.749 CC lib/iscsi/init_grp.o 00:14:14.749 CC lib/ftl/ftl_band_ops.o 00:14:14.749 CC lib/ftl/ftl_writer.o 00:14:14.749 CC lib/ftl/ftl_rq.o 00:14:15.007 CC lib/iscsi/iscsi.o 00:14:15.007 CC lib/ftl/ftl_reloc.o 00:14:15.007 CC lib/vhost/vhost_blk.o 00:14:15.008 CC lib/vhost/rte_vhost_user.o 00:14:15.266 CC lib/iscsi/md5.o 00:14:15.266 CC lib/iscsi/param.o 00:14:15.266 CC lib/iscsi/portal_grp.o 00:14:15.266 CC lib/ftl/ftl_l2p_cache.o 00:14:15.525 CC lib/iscsi/tgt_node.o 00:14:15.525 CC lib/iscsi/iscsi_subsystem.o 00:14:15.525 CC lib/iscsi/iscsi_rpc.o 00:14:15.525 CC lib/iscsi/task.o 00:14:15.783 CC lib/ftl/ftl_p2l.o 00:14:15.783 CC lib/ftl/mngt/ftl_mngt.o 00:14:15.783 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:14:16.042 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:14:16.042 CC lib/ftl/mngt/ftl_mngt_startup.o 00:14:16.042 CC lib/ftl/mngt/ftl_mngt_md.o 00:14:16.042 CC lib/ftl/mngt/ftl_mngt_misc.o 00:14:16.042 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:14:16.042 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:14:16.042 LIB libspdk_nvmf.a 00:14:16.042 CC lib/ftl/mngt/ftl_mngt_band.o 00:14:16.042 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:14:16.042 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:14:16.300 LIB libspdk_vhost.a 00:14:16.300 SO libspdk_nvmf.so.18.0 00:14:16.300 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:14:16.300 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:14:16.300 SO libspdk_vhost.so.8.0 00:14:16.300 CC lib/ftl/utils/ftl_conf.o 00:14:16.300 CC lib/ftl/utils/ftl_md.o 00:14:16.300 SYMLINK libspdk_nvmf.so 00:14:16.300 CC lib/ftl/utils/ftl_mempool.o 00:14:16.300 CC lib/ftl/utils/ftl_bitmap.o 00:14:16.300 CC lib/ftl/utils/ftl_property.o 00:14:16.300 SYMLINK libspdk_vhost.so 00:14:16.300 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:14:16.558 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:14:16.558 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:14:16.558 LIB libspdk_iscsi.a 00:14:16.558 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:14:16.558 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:14:16.558 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:14:16.558 SO libspdk_iscsi.so.8.0 00:14:16.558 CC lib/ftl/upgrade/ftl_sb_v3.o 00:14:16.558 CC lib/ftl/upgrade/ftl_sb_v5.o 00:14:16.817 CC lib/ftl/nvc/ftl_nvc_dev.o 00:14:16.817 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:14:16.817 CC lib/ftl/base/ftl_base_dev.o 00:14:16.817 CC lib/ftl/base/ftl_base_bdev.o 00:14:16.817 CC lib/ftl/ftl_trace.o 00:14:16.817 SYMLINK libspdk_iscsi.so 00:14:17.074 LIB libspdk_ftl.a 00:14:17.332 SO libspdk_ftl.so.9.0 00:14:17.590 SYMLINK libspdk_ftl.so 00:14:18.157 CC module/env_dpdk/env_dpdk_rpc.o 00:14:18.157 CC module/sock/posix/posix.o 00:14:18.157 CC module/blob/bdev/blob_bdev.o 00:14:18.157 CC module/scheduler/dynamic/scheduler_dynamic.o 00:14:18.157 CC module/accel/ioat/accel_ioat.o 00:14:18.157 CC module/scheduler/gscheduler/gscheduler.o 00:14:18.157 CC module/sock/uring/uring.o 00:14:18.157 CC module/keyring/file/keyring.o 00:14:18.157 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:14:18.157 CC module/accel/error/accel_error.o 00:14:18.157 LIB libspdk_env_dpdk_rpc.a 00:14:18.157 SO libspdk_env_dpdk_rpc.so.6.0 00:14:18.157 LIB libspdk_scheduler_gscheduler.a 00:14:18.416 SYMLINK libspdk_env_dpdk_rpc.so 00:14:18.416 CC module/keyring/file/keyring_rpc.o 00:14:18.416 LIB libspdk_scheduler_dpdk_governor.a 00:14:18.416 CC module/accel/ioat/accel_ioat_rpc.o 00:14:18.416 SO libspdk_scheduler_gscheduler.so.4.0 00:14:18.416 LIB libspdk_scheduler_dynamic.a 00:14:18.416 SO libspdk_scheduler_dpdk_governor.so.4.0 00:14:18.416 CC module/accel/error/accel_error_rpc.o 00:14:18.416 SYMLINK libspdk_scheduler_gscheduler.so 00:14:18.416 SO libspdk_scheduler_dynamic.so.4.0 00:14:18.416 SYMLINK libspdk_scheduler_dpdk_governor.so 00:14:18.416 LIB libspdk_blob_bdev.a 00:14:18.416 SYMLINK libspdk_scheduler_dynamic.so 00:14:18.416 SO libspdk_blob_bdev.so.11.0 00:14:18.416 LIB libspdk_keyring_file.a 00:14:18.416 LIB libspdk_accel_ioat.a 00:14:18.416 SO libspdk_keyring_file.so.1.0 00:14:18.416 SYMLINK libspdk_blob_bdev.so 00:14:18.416 SO libspdk_accel_ioat.so.6.0 00:14:18.416 LIB libspdk_accel_error.a 00:14:18.416 CC module/accel/dsa/accel_dsa.o 00:14:18.416 CC module/accel/dsa/accel_dsa_rpc.o 00:14:18.416 SYMLINK libspdk_keyring_file.so 00:14:18.674 SO libspdk_accel_error.so.2.0 00:14:18.674 CC module/accel/iaa/accel_iaa.o 00:14:18.674 SYMLINK libspdk_accel_ioat.so 00:14:18.674 CC module/accel/iaa/accel_iaa_rpc.o 00:14:18.674 SYMLINK libspdk_accel_error.so 00:14:18.674 LIB libspdk_accel_iaa.a 00:14:18.674 CC module/bdev/delay/vbdev_delay.o 00:14:18.674 CC module/blobfs/bdev/blobfs_bdev.o 00:14:18.674 CC module/bdev/error/vbdev_error.o 00:14:18.674 SO libspdk_accel_iaa.so.3.0 00:14:18.674 CC module/bdev/gpt/gpt.o 00:14:18.933 LIB libspdk_sock_posix.a 00:14:18.933 CC module/bdev/lvol/vbdev_lvol.o 00:14:18.933 LIB libspdk_accel_dsa.a 00:14:18.933 SO libspdk_sock_posix.so.6.0 00:14:18.933 LIB libspdk_sock_uring.a 00:14:18.933 SO libspdk_accel_dsa.so.5.0 00:14:18.933 SYMLINK libspdk_accel_iaa.so 00:14:18.933 CC module/bdev/error/vbdev_error_rpc.o 00:14:18.933 CC module/bdev/malloc/bdev_malloc.o 00:14:18.933 SO libspdk_sock_uring.so.5.0 00:14:18.933 SYMLINK libspdk_sock_posix.so 00:14:18.933 CC module/bdev/gpt/vbdev_gpt.o 00:14:18.933 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:14:18.933 SYMLINK libspdk_sock_uring.so 00:14:18.933 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:14:18.933 SYMLINK libspdk_accel_dsa.so 00:14:18.933 CC module/bdev/malloc/bdev_malloc_rpc.o 00:14:18.933 CC module/bdev/delay/vbdev_delay_rpc.o 00:14:19.191 LIB libspdk_bdev_error.a 00:14:19.191 SO libspdk_bdev_error.so.6.0 00:14:19.191 LIB libspdk_blobfs_bdev.a 00:14:19.191 SYMLINK libspdk_bdev_error.so 00:14:19.191 LIB libspdk_bdev_delay.a 00:14:19.191 SO libspdk_blobfs_bdev.so.6.0 00:14:19.191 LIB libspdk_bdev_gpt.a 00:14:19.191 SO libspdk_bdev_delay.so.6.0 00:14:19.191 SO libspdk_bdev_gpt.so.6.0 00:14:19.191 LIB libspdk_bdev_malloc.a 00:14:19.449 SYMLINK libspdk_blobfs_bdev.so 00:14:19.449 CC module/bdev/null/bdev_null.o 00:14:19.449 SO libspdk_bdev_malloc.so.6.0 00:14:19.449 SYMLINK libspdk_bdev_delay.so 00:14:19.449 SYMLINK libspdk_bdev_gpt.so 00:14:19.449 CC module/bdev/null/bdev_null_rpc.o 00:14:19.449 SYMLINK libspdk_bdev_malloc.so 00:14:19.449 LIB libspdk_bdev_lvol.a 00:14:19.449 CC module/bdev/passthru/vbdev_passthru.o 00:14:19.449 CC module/bdev/nvme/bdev_nvme.o 00:14:19.449 CC module/bdev/raid/bdev_raid.o 00:14:19.449 SO libspdk_bdev_lvol.so.6.0 00:14:19.449 CC 
module/bdev/split/vbdev_split.o 00:14:19.449 CC module/bdev/zone_block/vbdev_zone_block.o 00:14:19.449 SYMLINK libspdk_bdev_lvol.so 00:14:19.449 CC module/bdev/uring/bdev_uring.o 00:14:19.707 CC module/bdev/aio/bdev_aio.o 00:14:19.707 LIB libspdk_bdev_null.a 00:14:19.707 SO libspdk_bdev_null.so.6.0 00:14:19.707 CC module/bdev/ftl/bdev_ftl.o 00:14:19.707 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:14:19.707 CC module/bdev/iscsi/bdev_iscsi.o 00:14:19.707 CC module/bdev/split/vbdev_split_rpc.o 00:14:19.707 SYMLINK libspdk_bdev_null.so 00:14:19.981 LIB libspdk_bdev_passthru.a 00:14:19.981 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:14:19.981 CC module/bdev/virtio/bdev_virtio_scsi.o 00:14:19.981 SO libspdk_bdev_passthru.so.6.0 00:14:19.981 LIB libspdk_bdev_split.a 00:14:19.981 CC module/bdev/uring/bdev_uring_rpc.o 00:14:19.981 CC module/bdev/aio/bdev_aio_rpc.o 00:14:19.981 SO libspdk_bdev_split.so.6.0 00:14:19.981 SYMLINK libspdk_bdev_passthru.so 00:14:19.981 CC module/bdev/ftl/bdev_ftl_rpc.o 00:14:19.981 CC module/bdev/nvme/bdev_nvme_rpc.o 00:14:19.981 SYMLINK libspdk_bdev_split.so 00:14:19.981 CC module/bdev/raid/bdev_raid_rpc.o 00:14:19.981 LIB libspdk_bdev_zone_block.a 00:14:20.245 SO libspdk_bdev_zone_block.so.6.0 00:14:20.245 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:14:20.245 LIB libspdk_bdev_uring.a 00:14:20.245 LIB libspdk_bdev_aio.a 00:14:20.245 SO libspdk_bdev_uring.so.6.0 00:14:20.245 SO libspdk_bdev_aio.so.6.0 00:14:20.245 SYMLINK libspdk_bdev_zone_block.so 00:14:20.245 CC module/bdev/raid/bdev_raid_sb.o 00:14:20.245 SYMLINK libspdk_bdev_uring.so 00:14:20.245 SYMLINK libspdk_bdev_aio.so 00:14:20.245 CC module/bdev/nvme/nvme_rpc.o 00:14:20.245 CC module/bdev/virtio/bdev_virtio_blk.o 00:14:20.245 LIB libspdk_bdev_ftl.a 00:14:20.245 CC module/bdev/nvme/bdev_mdns_client.o 00:14:20.245 SO libspdk_bdev_ftl.so.6.0 00:14:20.245 LIB libspdk_bdev_iscsi.a 00:14:20.245 SYMLINK libspdk_bdev_ftl.so 00:14:20.245 CC module/bdev/nvme/vbdev_opal.o 00:14:20.503 SO libspdk_bdev_iscsi.so.6.0 00:14:20.503 CC module/bdev/nvme/vbdev_opal_rpc.o 00:14:20.503 SYMLINK libspdk_bdev_iscsi.so 00:14:20.503 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:14:20.503 CC module/bdev/virtio/bdev_virtio_rpc.o 00:14:20.503 CC module/bdev/raid/raid0.o 00:14:20.503 CC module/bdev/raid/raid1.o 00:14:20.503 CC module/bdev/raid/concat.o 00:14:20.761 LIB libspdk_bdev_virtio.a 00:14:20.761 SO libspdk_bdev_virtio.so.6.0 00:14:20.761 LIB libspdk_bdev_raid.a 00:14:20.761 SO libspdk_bdev_raid.so.6.0 00:14:20.761 SYMLINK libspdk_bdev_virtio.so 00:14:21.020 SYMLINK libspdk_bdev_raid.so 00:14:21.586 LIB libspdk_bdev_nvme.a 00:14:21.847 SO libspdk_bdev_nvme.so.7.0 00:14:21.847 SYMLINK libspdk_bdev_nvme.so 00:14:22.422 CC module/event/subsystems/iobuf/iobuf.o 00:14:22.422 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:14:22.422 CC module/event/subsystems/keyring/keyring.o 00:14:22.422 CC module/event/subsystems/sock/sock.o 00:14:22.422 CC module/event/subsystems/vmd/vmd.o 00:14:22.422 CC module/event/subsystems/vmd/vmd_rpc.o 00:14:22.422 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:14:22.422 CC module/event/subsystems/scheduler/scheduler.o 00:14:22.422 LIB libspdk_event_sock.a 00:14:22.422 LIB libspdk_event_keyring.a 00:14:22.422 LIB libspdk_event_vhost_blk.a 00:14:22.422 LIB libspdk_event_vmd.a 00:14:22.422 LIB libspdk_event_iobuf.a 00:14:22.422 LIB libspdk_event_scheduler.a 00:14:22.422 SO libspdk_event_keyring.so.1.0 00:14:22.422 SO libspdk_event_vhost_blk.so.3.0 00:14:22.422 SO libspdk_event_sock.so.5.0 00:14:22.679 
SO libspdk_event_vmd.so.6.0 00:14:22.679 SO libspdk_event_scheduler.so.4.0 00:14:22.679 SO libspdk_event_iobuf.so.3.0 00:14:22.679 SYMLINK libspdk_event_vhost_blk.so 00:14:22.679 SYMLINK libspdk_event_sock.so 00:14:22.679 SYMLINK libspdk_event_keyring.so 00:14:22.679 SYMLINK libspdk_event_scheduler.so 00:14:22.679 SYMLINK libspdk_event_vmd.so 00:14:22.679 SYMLINK libspdk_event_iobuf.so 00:14:22.937 CC module/event/subsystems/accel/accel.o 00:14:23.195 LIB libspdk_event_accel.a 00:14:23.195 SO libspdk_event_accel.so.6.0 00:14:23.195 SYMLINK libspdk_event_accel.so 00:14:23.453 CC module/event/subsystems/bdev/bdev.o 00:14:23.711 LIB libspdk_event_bdev.a 00:14:23.711 SO libspdk_event_bdev.so.6.0 00:14:23.711 SYMLINK libspdk_event_bdev.so 00:14:23.969 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:14:23.969 CC module/event/subsystems/nbd/nbd.o 00:14:23.969 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:14:23.969 CC module/event/subsystems/ublk/ublk.o 00:14:23.969 CC module/event/subsystems/scsi/scsi.o 00:14:24.227 LIB libspdk_event_nbd.a 00:14:24.227 LIB libspdk_event_ublk.a 00:14:24.227 SO libspdk_event_nbd.so.6.0 00:14:24.227 SO libspdk_event_ublk.so.3.0 00:14:24.227 LIB libspdk_event_scsi.a 00:14:24.227 SO libspdk_event_scsi.so.6.0 00:14:24.227 SYMLINK libspdk_event_nbd.so 00:14:24.227 SYMLINK libspdk_event_ublk.so 00:14:24.227 LIB libspdk_event_nvmf.a 00:14:24.227 SYMLINK libspdk_event_scsi.so 00:14:24.228 SO libspdk_event_nvmf.so.6.0 00:14:24.486 SYMLINK libspdk_event_nvmf.so 00:14:24.486 CC module/event/subsystems/iscsi/iscsi.o 00:14:24.486 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:14:24.745 LIB libspdk_event_vhost_scsi.a 00:14:24.745 LIB libspdk_event_iscsi.a 00:14:24.745 SO libspdk_event_vhost_scsi.so.3.0 00:14:24.745 SO libspdk_event_iscsi.so.6.0 00:14:24.745 SYMLINK libspdk_event_vhost_scsi.so 00:14:25.004 SYMLINK libspdk_event_iscsi.so 00:14:25.004 SO libspdk.so.6.0 00:14:25.004 SYMLINK libspdk.so 00:14:25.263 CC app/trace_record/trace_record.o 00:14:25.263 CXX app/trace/trace.o 00:14:25.263 CC examples/nvme/hello_world/hello_world.o 00:14:25.263 CC examples/accel/perf/accel_perf.o 00:14:25.520 CC examples/ioat/perf/perf.o 00:14:25.520 CC test/app/bdev_svc/bdev_svc.o 00:14:25.520 CC test/bdev/bdevio/bdevio.o 00:14:25.520 CC examples/blob/hello_world/hello_blob.o 00:14:25.520 CC examples/bdev/hello_world/hello_bdev.o 00:14:25.520 CC test/accel/dif/dif.o 00:14:25.520 LINK spdk_trace_record 00:14:25.520 LINK ioat_perf 00:14:25.779 LINK hello_world 00:14:25.779 LINK bdev_svc 00:14:25.779 LINK spdk_trace 00:14:25.779 LINK hello_blob 00:14:25.779 LINK hello_bdev 00:14:25.779 CC examples/bdev/bdevperf/bdevperf.o 00:14:25.779 LINK bdevio 00:14:25.779 CC examples/ioat/verify/verify.o 00:14:25.779 LINK accel_perf 00:14:26.037 CC examples/nvme/reconnect/reconnect.o 00:14:26.037 LINK dif 00:14:26.037 CC test/app/histogram_perf/histogram_perf.o 00:14:26.037 CC examples/blob/cli/blobcli.o 00:14:26.037 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:14:26.037 CC app/nvmf_tgt/nvmf_main.o 00:14:26.037 LINK verify 00:14:26.295 LINK histogram_perf 00:14:26.295 CC examples/sock/hello_world/hello_sock.o 00:14:26.295 LINK nvmf_tgt 00:14:26.295 TEST_HEADER include/spdk/accel.h 00:14:26.295 CC test/blobfs/mkfs/mkfs.o 00:14:26.295 TEST_HEADER include/spdk/accel_module.h 00:14:26.295 TEST_HEADER include/spdk/assert.h 00:14:26.295 TEST_HEADER include/spdk/barrier.h 00:14:26.295 TEST_HEADER include/spdk/base64.h 00:14:26.295 TEST_HEADER include/spdk/bdev.h 00:14:26.295 LINK reconnect 00:14:26.295 
TEST_HEADER include/spdk/bdev_module.h 00:14:26.295 TEST_HEADER include/spdk/bdev_zone.h 00:14:26.295 TEST_HEADER include/spdk/bit_array.h 00:14:26.295 TEST_HEADER include/spdk/bit_pool.h 00:14:26.295 TEST_HEADER include/spdk/blob_bdev.h 00:14:26.295 TEST_HEADER include/spdk/blobfs_bdev.h 00:14:26.295 TEST_HEADER include/spdk/blobfs.h 00:14:26.295 TEST_HEADER include/spdk/blob.h 00:14:26.295 TEST_HEADER include/spdk/conf.h 00:14:26.295 TEST_HEADER include/spdk/config.h 00:14:26.295 TEST_HEADER include/spdk/cpuset.h 00:14:26.296 TEST_HEADER include/spdk/crc16.h 00:14:26.296 TEST_HEADER include/spdk/crc32.h 00:14:26.296 TEST_HEADER include/spdk/crc64.h 00:14:26.296 TEST_HEADER include/spdk/dif.h 00:14:26.296 TEST_HEADER include/spdk/dma.h 00:14:26.296 TEST_HEADER include/spdk/endian.h 00:14:26.296 TEST_HEADER include/spdk/env_dpdk.h 00:14:26.296 TEST_HEADER include/spdk/env.h 00:14:26.296 TEST_HEADER include/spdk/event.h 00:14:26.296 TEST_HEADER include/spdk/fd_group.h 00:14:26.296 TEST_HEADER include/spdk/fd.h 00:14:26.296 TEST_HEADER include/spdk/file.h 00:14:26.296 TEST_HEADER include/spdk/ftl.h 00:14:26.296 TEST_HEADER include/spdk/gpt_spec.h 00:14:26.296 TEST_HEADER include/spdk/hexlify.h 00:14:26.296 TEST_HEADER include/spdk/histogram_data.h 00:14:26.296 TEST_HEADER include/spdk/idxd.h 00:14:26.296 TEST_HEADER include/spdk/idxd_spec.h 00:14:26.296 TEST_HEADER include/spdk/init.h 00:14:26.296 TEST_HEADER include/spdk/ioat.h 00:14:26.296 TEST_HEADER include/spdk/ioat_spec.h 00:14:26.296 TEST_HEADER include/spdk/iscsi_spec.h 00:14:26.296 TEST_HEADER include/spdk/json.h 00:14:26.296 TEST_HEADER include/spdk/jsonrpc.h 00:14:26.296 TEST_HEADER include/spdk/keyring.h 00:14:26.296 TEST_HEADER include/spdk/keyring_module.h 00:14:26.296 TEST_HEADER include/spdk/likely.h 00:14:26.296 TEST_HEADER include/spdk/log.h 00:14:26.296 TEST_HEADER include/spdk/lvol.h 00:14:26.296 TEST_HEADER include/spdk/memory.h 00:14:26.296 TEST_HEADER include/spdk/mmio.h 00:14:26.296 TEST_HEADER include/spdk/nbd.h 00:14:26.296 TEST_HEADER include/spdk/notify.h 00:14:26.296 TEST_HEADER include/spdk/nvme.h 00:14:26.296 TEST_HEADER include/spdk/nvme_intel.h 00:14:26.296 TEST_HEADER include/spdk/nvme_ocssd.h 00:14:26.296 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:14:26.296 TEST_HEADER include/spdk/nvme_spec.h 00:14:26.296 TEST_HEADER include/spdk/nvme_zns.h 00:14:26.296 TEST_HEADER include/spdk/nvmf_cmd.h 00:14:26.296 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:14:26.296 TEST_HEADER include/spdk/nvmf.h 00:14:26.296 TEST_HEADER include/spdk/nvmf_spec.h 00:14:26.296 CC examples/vmd/lsvmd/lsvmd.o 00:14:26.296 TEST_HEADER include/spdk/nvmf_transport.h 00:14:26.296 TEST_HEADER include/spdk/opal.h 00:14:26.296 TEST_HEADER include/spdk/opal_spec.h 00:14:26.296 TEST_HEADER include/spdk/pci_ids.h 00:14:26.296 TEST_HEADER include/spdk/pipe.h 00:14:26.296 TEST_HEADER include/spdk/queue.h 00:14:26.554 TEST_HEADER include/spdk/reduce.h 00:14:26.554 TEST_HEADER include/spdk/rpc.h 00:14:26.554 TEST_HEADER include/spdk/scheduler.h 00:14:26.554 TEST_HEADER include/spdk/scsi.h 00:14:26.554 TEST_HEADER include/spdk/scsi_spec.h 00:14:26.554 TEST_HEADER include/spdk/sock.h 00:14:26.554 TEST_HEADER include/spdk/stdinc.h 00:14:26.554 TEST_HEADER include/spdk/string.h 00:14:26.554 TEST_HEADER include/spdk/thread.h 00:14:26.554 TEST_HEADER include/spdk/trace.h 00:14:26.554 TEST_HEADER include/spdk/trace_parser.h 00:14:26.554 TEST_HEADER include/spdk/tree.h 00:14:26.554 TEST_HEADER include/spdk/ublk.h 00:14:26.554 TEST_HEADER include/spdk/util.h 
00:14:26.554 TEST_HEADER include/spdk/uuid.h 00:14:26.554 LINK nvme_fuzz 00:14:26.554 TEST_HEADER include/spdk/version.h 00:14:26.554 TEST_HEADER include/spdk/vfio_user_pci.h 00:14:26.554 TEST_HEADER include/spdk/vfio_user_spec.h 00:14:26.554 TEST_HEADER include/spdk/vhost.h 00:14:26.554 TEST_HEADER include/spdk/vmd.h 00:14:26.554 TEST_HEADER include/spdk/xor.h 00:14:26.554 TEST_HEADER include/spdk/zipf.h 00:14:26.554 CXX test/cpp_headers/accel.o 00:14:26.554 LINK mkfs 00:14:26.554 LINK hello_sock 00:14:26.554 CC examples/nvme/nvme_manage/nvme_manage.o 00:14:26.554 LINK lsvmd 00:14:26.554 CC app/iscsi_tgt/iscsi_tgt.o 00:14:26.554 CC examples/nvmf/nvmf/nvmf.o 00:14:26.554 LINK bdevperf 00:14:26.554 LINK blobcli 00:14:26.812 CXX test/cpp_headers/accel_module.o 00:14:26.812 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:14:26.812 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:14:26.812 CC examples/vmd/led/led.o 00:14:26.812 LINK iscsi_tgt 00:14:26.812 CXX test/cpp_headers/assert.o 00:14:26.812 CC test/dma/test_dma/test_dma.o 00:14:26.812 LINK nvmf 00:14:26.812 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:14:27.070 LINK led 00:14:27.070 CC app/spdk_tgt/spdk_tgt.o 00:14:27.070 LINK nvme_manage 00:14:27.070 CXX test/cpp_headers/barrier.o 00:14:27.070 CC test/env/mem_callbacks/mem_callbacks.o 00:14:27.070 CC test/env/vtophys/vtophys.o 00:14:27.329 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:14:27.329 CC test/env/memory/memory_ut.o 00:14:27.329 LINK spdk_tgt 00:14:27.329 CXX test/cpp_headers/base64.o 00:14:27.329 LINK vtophys 00:14:27.329 LINK test_dma 00:14:27.329 LINK vhost_fuzz 00:14:27.329 CC examples/nvme/arbitration/arbitration.o 00:14:27.329 LINK env_dpdk_post_init 00:14:27.329 CXX test/cpp_headers/bdev.o 00:14:27.587 CC app/spdk_lspci/spdk_lspci.o 00:14:27.587 CC examples/nvme/hotplug/hotplug.o 00:14:27.587 CC examples/nvme/cmb_copy/cmb_copy.o 00:14:27.587 CXX test/cpp_headers/bdev_module.o 00:14:27.587 CC examples/nvme/abort/abort.o 00:14:27.587 CC app/spdk_nvme_perf/perf.o 00:14:27.845 LINK spdk_lspci 00:14:27.845 LINK arbitration 00:14:27.845 LINK mem_callbacks 00:14:27.845 LINK hotplug 00:14:27.845 LINK cmb_copy 00:14:27.845 CXX test/cpp_headers/bdev_zone.o 00:14:27.845 CC app/spdk_nvme_identify/identify.o 00:14:28.103 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:14:28.103 CC app/spdk_nvme_discover/discovery_aer.o 00:14:28.103 LINK abort 00:14:28.103 CC app/spdk_top/spdk_top.o 00:14:28.103 CXX test/cpp_headers/bit_array.o 00:14:28.103 CC app/vhost/vhost.o 00:14:28.103 LINK pmr_persistence 00:14:28.103 LINK memory_ut 00:14:28.103 LINK spdk_nvme_discover 00:14:28.361 CXX test/cpp_headers/bit_pool.o 00:14:28.361 LINK vhost 00:14:28.361 CC examples/util/zipf/zipf.o 00:14:28.361 CXX test/cpp_headers/blob_bdev.o 00:14:28.361 CC test/env/pci/pci_ut.o 00:14:28.620 LINK iscsi_fuzz 00:14:28.620 CC examples/thread/thread/thread_ex.o 00:14:28.620 LINK spdk_nvme_perf 00:14:28.620 LINK zipf 00:14:28.620 CXX test/cpp_headers/blobfs_bdev.o 00:14:28.620 CC test/event/event_perf/event_perf.o 00:14:28.620 CXX test/cpp_headers/blobfs.o 00:14:28.620 CXX test/cpp_headers/blob.o 00:14:28.620 LINK spdk_nvme_identify 00:14:28.877 LINK thread 00:14:28.877 LINK event_perf 00:14:28.877 CC test/app/jsoncat/jsoncat.o 00:14:28.877 CC test/lvol/esnap/esnap.o 00:14:28.877 LINK pci_ut 00:14:28.877 CXX test/cpp_headers/conf.o 00:14:28.877 LINK spdk_top 00:14:28.877 CC test/nvme/aer/aer.o 00:14:28.877 CC test/nvme/reset/reset.o 00:14:29.135 LINK jsoncat 00:14:29.135 CC app/spdk_dd/spdk_dd.o 00:14:29.135 
CXX test/cpp_headers/config.o 00:14:29.135 CC test/event/reactor/reactor.o 00:14:29.135 CXX test/cpp_headers/cpuset.o 00:14:29.135 CC examples/idxd/perf/perf.o 00:14:29.135 CC examples/interrupt_tgt/interrupt_tgt.o 00:14:29.135 CC test/rpc_client/rpc_client_test.o 00:14:29.135 LINK reset 00:14:29.135 LINK reactor 00:14:29.135 CC test/app/stub/stub.o 00:14:29.393 LINK aer 00:14:29.393 CXX test/cpp_headers/crc16.o 00:14:29.393 LINK interrupt_tgt 00:14:29.393 LINK rpc_client_test 00:14:29.393 LINK stub 00:14:29.393 CXX test/cpp_headers/crc32.o 00:14:29.393 CC test/event/reactor_perf/reactor_perf.o 00:14:29.651 LINK spdk_dd 00:14:29.651 CC test/event/app_repeat/app_repeat.o 00:14:29.651 LINK idxd_perf 00:14:29.651 CC test/nvme/sgl/sgl.o 00:14:29.651 CXX test/cpp_headers/crc64.o 00:14:29.651 CXX test/cpp_headers/dif.o 00:14:29.651 CXX test/cpp_headers/dma.o 00:14:29.651 LINK reactor_perf 00:14:29.651 LINK app_repeat 00:14:29.651 CXX test/cpp_headers/endian.o 00:14:29.651 CC test/event/scheduler/scheduler.o 00:14:29.651 CXX test/cpp_headers/env_dpdk.o 00:14:29.651 CXX test/cpp_headers/env.o 00:14:29.909 LINK sgl 00:14:29.909 CC app/fio/nvme/fio_plugin.o 00:14:29.909 CC test/nvme/e2edp/nvme_dp.o 00:14:29.909 CXX test/cpp_headers/event.o 00:14:29.909 CC test/nvme/overhead/overhead.o 00:14:29.909 CC test/thread/poller_perf/poller_perf.o 00:14:29.909 LINK scheduler 00:14:29.909 CC test/nvme/err_injection/err_injection.o 00:14:30.167 CC app/fio/bdev/fio_plugin.o 00:14:30.167 CC test/nvme/startup/startup.o 00:14:30.167 CXX test/cpp_headers/fd_group.o 00:14:30.167 LINK poller_perf 00:14:30.167 LINK nvme_dp 00:14:30.167 LINK err_injection 00:14:30.167 LINK overhead 00:14:30.167 LINK startup 00:14:30.167 CC test/nvme/reserve/reserve.o 00:14:30.425 CXX test/cpp_headers/fd.o 00:14:30.425 CC test/nvme/simple_copy/simple_copy.o 00:14:30.425 CC test/nvme/connect_stress/connect_stress.o 00:14:30.425 LINK spdk_nvme 00:14:30.425 CXX test/cpp_headers/file.o 00:14:30.425 CC test/nvme/boot_partition/boot_partition.o 00:14:30.425 CC test/nvme/compliance/nvme_compliance.o 00:14:30.425 LINK reserve 00:14:30.425 CC test/nvme/fused_ordering/fused_ordering.o 00:14:30.683 LINK spdk_bdev 00:14:30.683 CC test/nvme/doorbell_aers/doorbell_aers.o 00:14:30.683 LINK simple_copy 00:14:30.683 LINK connect_stress 00:14:30.683 CXX test/cpp_headers/ftl.o 00:14:30.683 LINK boot_partition 00:14:30.683 CXX test/cpp_headers/gpt_spec.o 00:14:30.683 LINK fused_ordering 00:14:30.683 CC test/nvme/fdp/fdp.o 00:14:30.942 CXX test/cpp_headers/hexlify.o 00:14:30.942 CXX test/cpp_headers/histogram_data.o 00:14:30.942 LINK nvme_compliance 00:14:30.942 LINK doorbell_aers 00:14:30.942 CXX test/cpp_headers/idxd.o 00:14:30.942 CXX test/cpp_headers/idxd_spec.o 00:14:30.942 CXX test/cpp_headers/init.o 00:14:30.942 CC test/nvme/cuse/cuse.o 00:14:30.942 CXX test/cpp_headers/ioat.o 00:14:30.942 CXX test/cpp_headers/ioat_spec.o 00:14:30.942 CXX test/cpp_headers/iscsi_spec.o 00:14:30.942 CXX test/cpp_headers/json.o 00:14:30.942 CXX test/cpp_headers/jsonrpc.o 00:14:30.942 CXX test/cpp_headers/keyring.o 00:14:30.942 CXX test/cpp_headers/keyring_module.o 00:14:31.201 LINK fdp 00:14:31.201 CXX test/cpp_headers/likely.o 00:14:31.201 CXX test/cpp_headers/log.o 00:14:31.201 CXX test/cpp_headers/lvol.o 00:14:31.201 CXX test/cpp_headers/memory.o 00:14:31.201 CXX test/cpp_headers/mmio.o 00:14:31.201 CXX test/cpp_headers/nbd.o 00:14:31.201 CXX test/cpp_headers/notify.o 00:14:31.201 CXX test/cpp_headers/nvme.o 00:14:31.201 CXX test/cpp_headers/nvme_intel.o 00:14:31.460 
CXX test/cpp_headers/nvme_ocssd.o 00:14:31.460 CXX test/cpp_headers/nvme_ocssd_spec.o 00:14:31.460 CXX test/cpp_headers/nvme_spec.o 00:14:31.460 CXX test/cpp_headers/nvme_zns.o 00:14:31.460 CXX test/cpp_headers/nvmf_cmd.o 00:14:31.460 CXX test/cpp_headers/nvmf_fc_spec.o 00:14:31.460 CXX test/cpp_headers/nvmf.o 00:14:31.460 CXX test/cpp_headers/nvmf_spec.o 00:14:31.460 CXX test/cpp_headers/nvmf_transport.o 00:14:31.460 CXX test/cpp_headers/opal.o 00:14:31.460 CXX test/cpp_headers/opal_spec.o 00:14:31.460 CXX test/cpp_headers/pci_ids.o 00:14:31.718 CXX test/cpp_headers/pipe.o 00:14:31.718 CXX test/cpp_headers/queue.o 00:14:31.718 CXX test/cpp_headers/reduce.o 00:14:31.718 CXX test/cpp_headers/rpc.o 00:14:31.718 CXX test/cpp_headers/scheduler.o 00:14:31.718 CXX test/cpp_headers/scsi.o 00:14:31.718 CXX test/cpp_headers/scsi_spec.o 00:14:31.718 CXX test/cpp_headers/sock.o 00:14:31.718 CXX test/cpp_headers/stdinc.o 00:14:31.718 CXX test/cpp_headers/string.o 00:14:31.718 CXX test/cpp_headers/thread.o 00:14:31.977 CXX test/cpp_headers/trace.o 00:14:31.977 CXX test/cpp_headers/trace_parser.o 00:14:31.977 CXX test/cpp_headers/tree.o 00:14:31.977 CXX test/cpp_headers/ublk.o 00:14:31.977 CXX test/cpp_headers/util.o 00:14:31.977 CXX test/cpp_headers/uuid.o 00:14:31.977 CXX test/cpp_headers/version.o 00:14:31.977 CXX test/cpp_headers/vfio_user_pci.o 00:14:31.977 CXX test/cpp_headers/vfio_user_spec.o 00:14:31.977 LINK cuse 00:14:31.977 CXX test/cpp_headers/vhost.o 00:14:31.977 CXX test/cpp_headers/vmd.o 00:14:31.977 CXX test/cpp_headers/xor.o 00:14:31.977 CXX test/cpp_headers/zipf.o 00:14:33.422 LINK esnap 00:14:33.989 00:14:33.989 real 1m3.364s 00:14:33.989 user 6m28.387s 00:14:33.989 sys 1m34.546s 00:14:33.989 12:11:27 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:14:33.989 12:11:27 -- common/autotest_common.sh@10 -- $ set +x 00:14:33.989 ************************************ 00:14:33.989 END TEST make 00:14:33.989 ************************************ 00:14:33.989 12:11:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:14:33.989 12:11:27 -- pm/common@30 -- $ signal_monitor_resources TERM 00:14:33.989 12:11:27 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:14:33.989 12:11:27 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:33.989 12:11:27 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:14:33.989 12:11:27 -- pm/common@45 -- $ pid=5210 00:14:33.989 12:11:27 -- pm/common@52 -- $ sudo kill -TERM 5210 00:14:33.989 12:11:27 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:33.989 12:11:27 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:14:33.989 12:11:27 -- pm/common@45 -- $ pid=5211 00:14:33.989 12:11:27 -- pm/common@52 -- $ sudo kill -TERM 5211 00:14:33.989 12:11:27 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:33.989 12:11:27 -- nvmf/common.sh@7 -- # uname -s 00:14:33.989 12:11:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.989 12:11:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.989 12:11:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.989 12:11:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.989 12:11:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.989 12:11:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.989 12:11:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.989 12:11:27 -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:14:33.989 12:11:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.989 12:11:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.248 12:11:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:14:34.248 12:11:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:14:34.248 12:11:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.248 12:11:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.248 12:11:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:34.248 12:11:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.248 12:11:27 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:34.248 12:11:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.248 12:11:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.248 12:11:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.248 12:11:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.248 12:11:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.248 12:11:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.248 12:11:27 -- paths/export.sh@5 -- # export PATH 00:14:34.248 12:11:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.248 12:11:27 -- nvmf/common.sh@47 -- # : 0 00:14:34.248 12:11:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:34.248 12:11:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:34.248 12:11:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.248 12:11:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.248 12:11:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.248 12:11:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:34.248 12:11:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:34.248 12:11:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:34.248 12:11:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:14:34.248 12:11:27 -- spdk/autotest.sh@32 -- # uname -s 00:14:34.248 12:11:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:14:34.248 12:11:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:14:34.248 12:11:27 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:14:34.248 12:11:27 -- spdk/autotest.sh@39 -- # echo 
'|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:14:34.248 12:11:27 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:14:34.248 12:11:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:14:34.248 12:11:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:14:34.248 12:11:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:14:34.248 12:11:27 -- spdk/autotest.sh@48 -- # udevadm_pid=52202 00:14:34.248 12:11:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:14:34.248 12:11:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:14:34.248 12:11:27 -- pm/common@17 -- # local monitor 00:14:34.248 12:11:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:34.248 12:11:27 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=52204 00:14:34.248 12:11:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:34.248 12:11:27 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=52205 00:14:34.248 12:11:27 -- pm/common@26 -- # sleep 1 00:14:34.248 12:11:27 -- pm/common@21 -- # date +%s 00:14:34.248 12:11:27 -- pm/common@21 -- # date +%s 00:14:34.248 12:11:27 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1714133487 00:14:34.248 12:11:27 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1714133487 00:14:34.248 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1714133487_collect-vmstat.pm.log 00:14:34.248 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1714133487_collect-cpu-load.pm.log 00:14:35.184 12:11:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:14:35.184 12:11:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:14:35.184 12:11:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:35.184 12:11:28 -- common/autotest_common.sh@10 -- # set +x 00:14:35.184 12:11:28 -- spdk/autotest.sh@59 -- # create_test_list 00:14:35.184 12:11:28 -- common/autotest_common.sh@734 -- # xtrace_disable 00:14:35.184 12:11:28 -- common/autotest_common.sh@10 -- # set +x 00:14:35.184 12:11:28 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:14:35.184 12:11:28 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:14:35.184 12:11:28 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:14:35.184 12:11:28 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:14:35.184 12:11:28 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:14:35.184 12:11:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:14:35.184 12:11:28 -- common/autotest_common.sh@1441 -- # uname 00:14:35.184 12:11:28 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:14:35.184 12:11:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:14:35.184 12:11:28 -- common/autotest_common.sh@1461 -- # uname 00:14:35.184 12:11:28 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:14:35.184 12:11:28 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:14:35.184 12:11:28 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:14:35.184 12:11:28 -- spdk/autotest.sh@72 -- # hash lcov 00:14:35.184 12:11:28 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:14:35.184 12:11:28 -- 
spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:14:35.184 --rc lcov_branch_coverage=1 00:14:35.184 --rc lcov_function_coverage=1 00:14:35.184 --rc genhtml_branch_coverage=1 00:14:35.184 --rc genhtml_function_coverage=1 00:14:35.184 --rc genhtml_legend=1 00:14:35.184 --rc geninfo_all_blocks=1 00:14:35.184 ' 00:14:35.184 12:11:28 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:14:35.184 --rc lcov_branch_coverage=1 00:14:35.184 --rc lcov_function_coverage=1 00:14:35.184 --rc genhtml_branch_coverage=1 00:14:35.184 --rc genhtml_function_coverage=1 00:14:35.184 --rc genhtml_legend=1 00:14:35.184 --rc geninfo_all_blocks=1 00:14:35.184 ' 00:14:35.184 12:11:28 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:14:35.184 --rc lcov_branch_coverage=1 00:14:35.184 --rc lcov_function_coverage=1 00:14:35.184 --rc genhtml_branch_coverage=1 00:14:35.184 --rc genhtml_function_coverage=1 00:14:35.184 --rc genhtml_legend=1 00:14:35.184 --rc geninfo_all_blocks=1 00:14:35.184 --no-external' 00:14:35.184 12:11:28 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:14:35.184 --rc lcov_branch_coverage=1 00:14:35.184 --rc lcov_function_coverage=1 00:14:35.184 --rc genhtml_branch_coverage=1 00:14:35.184 --rc genhtml_function_coverage=1 00:14:35.184 --rc genhtml_legend=1 00:14:35.184 --rc geninfo_all_blocks=1 00:14:35.184 --no-external' 00:14:35.184 12:11:28 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:14:35.442 lcov: LCOV version 1.14 00:14:35.442 12:11:28 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:14:43.554 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:14:43.554 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:14:43.554 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:14:43.554 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:14:43.554 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:14:43.554 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:14:50.116 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:14:50.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:15:02.323 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no 
functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:15:02.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:15:02.323 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:15:02.324 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:15:02.324 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:15:02.324 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:15:05.610 12:11:58 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:15:05.610 12:11:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:05.610 12:11:58 -- common/autotest_common.sh@10 -- # set +x 00:15:05.610 12:11:58 -- spdk/autotest.sh@91 -- # rm -f 00:15:05.610 12:11:58 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:06.177 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:06.177 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:06.177 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:06.177 12:11:59 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:15:06.177 12:11:59 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:15:06.177 12:11:59 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:15:06.177 12:11:59 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:15:06.177 12:11:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:06.177 12:11:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:15:06.177 12:11:59 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:15:06.177 12:11:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:06.177 12:11:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:06.177 12:11:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:06.177 12:11:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:15:06.177 12:11:59 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:15:06.177 12:11:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:06.177 12:11:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:06.177 12:11:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:06.177 12:11:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:15:06.177 12:11:59 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:15:06.177 12:11:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:15:06.177 12:11:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:06.177 12:11:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:06.177 12:11:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:15:06.177 12:11:59 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:15:06.177 12:11:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:15:06.177 12:11:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:06.177 12:11:59 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:15:06.177 12:11:59 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:15:06.177 12:11:59 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:15:06.177 12:11:59 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:15:06.177 12:11:59 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:15:06.177 
12:11:59 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:15:06.177 No valid GPT data, bailing 00:15:06.177 12:11:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:06.177 12:11:59 -- scripts/common.sh@391 -- # pt= 00:15:06.177 12:11:59 -- scripts/common.sh@392 -- # return 1 00:15:06.177 12:11:59 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:15:06.177 1+0 records in 00:15:06.177 1+0 records out 00:15:06.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481553 s, 218 MB/s 00:15:06.177 12:11:59 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:15:06.177 12:11:59 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:15:06.177 12:11:59 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:15:06.177 12:11:59 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:15:06.177 12:11:59 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:15:06.435 No valid GPT data, bailing 00:15:06.435 12:11:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:06.435 12:11:59 -- scripts/common.sh@391 -- # pt= 00:15:06.435 12:11:59 -- scripts/common.sh@392 -- # return 1 00:15:06.435 12:11:59 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:15:06.435 1+0 records in 00:15:06.435 1+0 records out 00:15:06.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494912 s, 212 MB/s 00:15:06.435 12:11:59 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:15:06.435 12:11:59 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:15:06.435 12:11:59 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:15:06.435 12:11:59 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:15:06.435 12:11:59 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:15:06.435 No valid GPT data, bailing 00:15:06.435 12:11:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:15:06.435 12:11:59 -- scripts/common.sh@391 -- # pt= 00:15:06.435 12:11:59 -- scripts/common.sh@392 -- # return 1 00:15:06.435 12:11:59 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:15:06.435 1+0 records in 00:15:06.435 1+0 records out 00:15:06.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432934 s, 242 MB/s 00:15:06.435 12:11:59 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:15:06.435 12:11:59 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:15:06.435 12:11:59 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:15:06.435 12:11:59 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:15:06.435 12:11:59 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:15:06.435 No valid GPT data, bailing 00:15:06.435 12:11:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:15:06.435 12:11:59 -- scripts/common.sh@391 -- # pt= 00:15:06.435 12:11:59 -- scripts/common.sh@392 -- # return 1 00:15:06.435 12:11:59 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:15:06.435 1+0 records in 00:15:06.435 1+0 records out 00:15:06.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00453646 s, 231 MB/s 00:15:06.435 12:11:59 -- spdk/autotest.sh@118 -- # sync 00:15:06.694 12:11:59 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:15:06.694 12:11:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:15:06.694 12:11:59 -- common/autotest_common.sh@22 
-- # reap_spdk_processes 00:15:08.602 12:12:01 -- spdk/autotest.sh@124 -- # uname -s 00:15:08.602 12:12:01 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:15:08.602 12:12:01 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:15:08.602 12:12:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:08.602 12:12:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:08.602 12:12:01 -- common/autotest_common.sh@10 -- # set +x 00:15:08.602 ************************************ 00:15:08.602 START TEST setup.sh 00:15:08.602 ************************************ 00:15:08.602 12:12:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:15:08.602 * Looking for test storage... 00:15:08.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:15:08.602 12:12:01 -- setup/test-setup.sh@10 -- # uname -s 00:15:08.602 12:12:01 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:15:08.602 12:12:01 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:15:08.602 12:12:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:08.602 12:12:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:08.602 12:12:01 -- common/autotest_common.sh@10 -- # set +x 00:15:08.602 ************************************ 00:15:08.602 START TEST acl 00:15:08.602 ************************************ 00:15:08.602 12:12:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:15:08.602 * Looking for test storage... 00:15:08.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:15:08.602 12:12:02 -- setup/acl.sh@10 -- # get_zoned_devs 00:15:08.602 12:12:02 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:15:08.602 12:12:02 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:15:08.602 12:12:02 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:15:08.602 12:12:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:08.602 12:12:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:15:08.602 12:12:02 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:15:08.602 12:12:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:08.602 12:12:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:08.602 12:12:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:08.602 12:12:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:15:08.602 12:12:02 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:15:08.602 12:12:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:08.602 12:12:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:08.602 12:12:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:08.602 12:12:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:15:08.602 12:12:02 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:15:08.602 12:12:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:15:08.602 12:12:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:08.602 12:12:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:08.602 12:12:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:15:08.602 12:12:02 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 
00:15:08.602 12:12:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:15:08.602 12:12:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:08.602 12:12:02 -- setup/acl.sh@12 -- # devs=() 00:15:08.602 12:12:02 -- setup/acl.sh@12 -- # declare -a devs 00:15:08.602 12:12:02 -- setup/acl.sh@13 -- # drivers=() 00:15:08.602 12:12:02 -- setup/acl.sh@13 -- # declare -A drivers 00:15:08.602 12:12:02 -- setup/acl.sh@51 -- # setup reset 00:15:08.602 12:12:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:15:08.602 12:12:02 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:09.537 12:12:02 -- setup/acl.sh@52 -- # collect_setup_devs 00:15:09.537 12:12:02 -- setup/acl.sh@16 -- # local dev driver 00:15:09.537 12:12:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:15:09.537 12:12:02 -- setup/acl.sh@15 -- # setup output status 00:15:09.537 12:12:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:09.537 12:12:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:15:10.103 12:12:03 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:15:10.103 12:12:03 -- setup/acl.sh@19 -- # continue 00:15:10.103 12:12:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:15:10.103 Hugepages 00:15:10.103 node hugesize free / total 00:15:10.103 12:12:03 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:15:10.103 12:12:03 -- setup/acl.sh@19 -- # continue 00:15:10.103 12:12:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:15:10.103 00:15:10.103 Type BDF Vendor Device NUMA Driver Device Block devices 00:15:10.103 12:12:03 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:15:10.103 12:12:03 -- setup/acl.sh@19 -- # continue 00:15:10.103 12:12:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:15:10.103 12:12:03 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:15:10.103 12:12:03 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:15:10.103 12:12:03 -- setup/acl.sh@20 -- # continue 00:15:10.103 12:12:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:15:10.103 12:12:03 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:15:10.103 12:12:03 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:15:10.103 12:12:03 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:15:10.103 12:12:03 -- setup/acl.sh@22 -- # devs+=("$dev") 00:15:10.103 12:12:03 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:15:10.103 12:12:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:15:10.361 12:12:03 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:15:10.361 12:12:03 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:15:10.361 12:12:03 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:15:10.361 12:12:03 -- setup/acl.sh@22 -- # devs+=("$dev") 00:15:10.361 12:12:03 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:15:10.361 12:12:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:15:10.361 12:12:03 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:15:10.361 12:12:03 -- setup/acl.sh@54 -- # run_test denied denied 00:15:10.361 12:12:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:10.361 12:12:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:10.361 12:12:03 -- common/autotest_common.sh@10 -- # set +x 00:15:10.361 ************************************ 00:15:10.361 START TEST denied 00:15:10.361 ************************************ 00:15:10.361 12:12:03 -- common/autotest_common.sh@1111 -- # denied 00:15:10.361 12:12:03 -- 
setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:15:10.361 12:12:03 -- setup/acl.sh@38 -- # setup output config 00:15:10.361 12:12:03 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:15:10.361 12:12:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:10.361 12:12:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:11.296 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:15:11.296 12:12:04 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:15:11.296 12:12:04 -- setup/acl.sh@28 -- # local dev driver 00:15:11.296 12:12:04 -- setup/acl.sh@30 -- # for dev in "$@" 00:15:11.296 12:12:04 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:15:11.296 12:12:04 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:15:11.296 12:12:04 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:15:11.296 12:12:04 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:15:11.296 12:12:04 -- setup/acl.sh@41 -- # setup reset 00:15:11.296 12:12:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:15:11.296 12:12:04 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:11.863 00:15:11.863 real 0m1.407s 00:15:11.863 user 0m0.569s 00:15:11.863 sys 0m0.795s 00:15:11.863 12:12:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:11.863 12:12:05 -- common/autotest_common.sh@10 -- # set +x 00:15:11.863 ************************************ 00:15:11.863 END TEST denied 00:15:11.863 ************************************ 00:15:11.863 12:12:05 -- setup/acl.sh@55 -- # run_test allowed allowed 00:15:11.863 12:12:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:11.863 12:12:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:11.863 12:12:05 -- common/autotest_common.sh@10 -- # set +x 00:15:11.863 ************************************ 00:15:11.863 START TEST allowed 00:15:11.863 ************************************ 00:15:11.863 12:12:05 -- common/autotest_common.sh@1111 -- # allowed 00:15:11.863 12:12:05 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:15:11.863 12:12:05 -- setup/acl.sh@45 -- # setup output config 00:15:11.863 12:12:05 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:15:11.863 12:12:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:11.863 12:12:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:12.800 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:12.800 12:12:06 -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:15:12.800 12:12:06 -- setup/acl.sh@28 -- # local dev driver 00:15:12.800 12:12:06 -- setup/acl.sh@30 -- # for dev in "$@" 00:15:12.800 12:12:06 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:15:12.800 12:12:06 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:15:12.800 12:12:06 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:15:12.800 12:12:06 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:15:12.800 12:12:06 -- setup/acl.sh@48 -- # setup reset 00:15:12.800 12:12:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:15:12.800 12:12:06 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:13.367 00:15:13.367 real 0m1.515s 00:15:13.367 user 0m0.666s 00:15:13.367 sys 0m0.833s 00:15:13.367 12:12:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:13.367 12:12:06 -- common/autotest_common.sh@10 -- # set +x 00:15:13.367 
************************************ 00:15:13.367 END TEST allowed 00:15:13.367 ************************************ 00:15:13.367 00:15:13.367 real 0m4.808s 00:15:13.367 user 0m2.112s 00:15:13.367 sys 0m2.620s 00:15:13.367 12:12:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:13.367 12:12:06 -- common/autotest_common.sh@10 -- # set +x 00:15:13.367 ************************************ 00:15:13.367 END TEST acl 00:15:13.367 ************************************ 00:15:13.367 12:12:06 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:15:13.367 12:12:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:13.367 12:12:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:13.367 12:12:06 -- common/autotest_common.sh@10 -- # set +x 00:15:13.627 ************************************ 00:15:13.627 START TEST hugepages 00:15:13.627 ************************************ 00:15:13.627 12:12:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:15:13.627 * Looking for test storage... 00:15:13.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:15:13.627 12:12:06 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:15:13.627 12:12:06 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:15:13.627 12:12:06 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:15:13.627 12:12:06 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:15:13.627 12:12:06 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:15:13.627 12:12:06 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:15:13.627 12:12:06 -- setup/common.sh@17 -- # local get=Hugepagesize 00:15:13.627 12:12:06 -- setup/common.sh@18 -- # local node= 00:15:13.627 12:12:06 -- setup/common.sh@19 -- # local var val 00:15:13.627 12:12:06 -- setup/common.sh@20 -- # local mem_f mem 00:15:13.627 12:12:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:13.627 12:12:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:13.627 12:12:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:13.627 12:12:06 -- setup/common.sh@28 -- # mapfile -t mem 00:15:13.627 12:12:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.627 12:12:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5592128 kB' 'MemAvailable: 7397416 kB' 'Buffers: 2952 kB' 'Cached: 2017328 kB' 'SwapCached: 0 kB' 'Active: 835220 kB' 'Inactive: 1291784 kB' 'Active(anon): 117212 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1356 kB' 'Writeback: 0 kB' 'AnonPages: 108616 kB' 'Mapped: 48752 kB' 'Shmem: 10488 kB' 'KReclaimable: 64868 kB' 'Slab: 138116 kB' 'SReclaimable: 64868 kB' 'SUnreclaim: 73248 kB' 'KernelStack: 6608 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 340432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 4194304 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.627 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.627 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r 
var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:06 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:06 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:07 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:07 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:07 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:07 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:07 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:07 -- setup/common.sh@32 -- # continue 00:15:13.628 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:13.628 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:13.628 12:12:07 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:15:13.628 12:12:07 -- setup/common.sh@33 -- # echo 2048 00:15:13.628 12:12:07 -- setup/common.sh@33 -- # return 0 00:15:13.628 12:12:07 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:15:13.628 12:12:07 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:15:13.628 12:12:07 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:15:13.628 12:12:07 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:15:13.628 12:12:07 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:15:13.628 12:12:07 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:15:13.628 12:12:07 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:15:13.628 12:12:07 -- setup/hugepages.sh@207 -- # get_nodes 00:15:13.628 12:12:07 -- setup/hugepages.sh@27 -- # local node 00:15:13.628 12:12:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:13.628 12:12:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:15:13.628 12:12:07 -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:13.628 12:12:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:13.628 12:12:07 -- setup/hugepages.sh@208 -- # clear_hp 00:15:13.628 12:12:07 -- setup/hugepages.sh@37 -- # local node hp 00:15:13.628 12:12:07 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:15:13.629 12:12:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:15:13.629 12:12:07 -- setup/hugepages.sh@41 -- # echo 0 00:15:13.629 12:12:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:15:13.629 12:12:07 -- setup/hugepages.sh@41 -- # echo 0 00:15:13.629 12:12:07 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:15:13.629 12:12:07 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:15:13.629 12:12:07 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:15:13.629 12:12:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:13.629 12:12:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:13.629 12:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:13.888 ************************************ 00:15:13.888 START TEST default_setup 00:15:13.888 ************************************ 00:15:13.888 12:12:07 -- common/autotest_common.sh@1111 -- # default_setup 00:15:13.888 12:12:07 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:15:13.888 12:12:07 -- setup/hugepages.sh@49 -- # local size=2097152 00:15:13.888 12:12:07 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:15:13.888 12:12:07 -- setup/hugepages.sh@51 -- # shift 00:15:13.888 12:12:07 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:15:13.888 12:12:07 -- setup/hugepages.sh@52 -- # local node_ids 00:15:13.888 12:12:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:15:13.888 12:12:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:15:13.888 12:12:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:15:13.888 12:12:07 -- setup/hugepages.sh@62 -- # 
user_nodes=('0') 00:15:13.888 12:12:07 -- setup/hugepages.sh@62 -- # local user_nodes 00:15:13.888 12:12:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:15:13.888 12:12:07 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:15:13.888 12:12:07 -- setup/hugepages.sh@67 -- # nodes_test=() 00:15:13.888 12:12:07 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:15:13.888 12:12:07 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:15:13.888 12:12:07 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:15:13.888 12:12:07 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:15:13.888 12:12:07 -- setup/hugepages.sh@73 -- # return 0 00:15:13.888 12:12:07 -- setup/hugepages.sh@137 -- # setup output 00:15:13.888 12:12:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:13.888 12:12:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:14.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:14.455 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:14.455 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:14.717 12:12:07 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:15:14.717 12:12:07 -- setup/hugepages.sh@89 -- # local node 00:15:14.717 12:12:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:15:14.717 12:12:07 -- setup/hugepages.sh@91 -- # local sorted_s 00:15:14.717 12:12:07 -- setup/hugepages.sh@92 -- # local surp 00:15:14.717 12:12:07 -- setup/hugepages.sh@93 -- # local resv 00:15:14.717 12:12:07 -- setup/hugepages.sh@94 -- # local anon 00:15:14.717 12:12:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:15:14.717 12:12:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:14.717 12:12:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:14.717 12:12:07 -- setup/common.sh@18 -- # local node= 00:15:14.717 12:12:07 -- setup/common.sh@19 -- # local var val 00:15:14.717 12:12:07 -- setup/common.sh@20 -- # local mem_f mem 00:15:14.717 12:12:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:14.717 12:12:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:14.717 12:12:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:14.717 12:12:07 -- setup/common.sh@28 -- # mapfile -t mem 00:15:14.717 12:12:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.717 12:12:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7693340 kB' 'MemAvailable: 9498528 kB' 'Buffers: 2952 kB' 'Cached: 2017360 kB' 'SwapCached: 0 kB' 'Active: 852040 kB' 'Inactive: 1291828 kB' 'Active(anon): 134032 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 540 kB' 'Writeback: 0 kB' 'AnonPages: 125168 kB' 'Mapped: 48872 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 137944 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73360 kB' 'KernelStack: 6560 kB' 'PageTables: 4544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
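The trace above shows get_test_nr_hugepages turning a 2097152 kB request into nr_hugepages=1024 against the 2048 kB default page size read from /proc/meminfo. A minimal stand-alone sketch of that arithmetic (a hypothetical helper, not the SPDK setup/hugepages.sh code) could look like:

#!/usr/bin/env bash
# Hypothetical helper: convert a requested amount of hugepage memory (kB)
# into a page count, using the system's Hugepagesize as seen in the trace.
set -euo pipefail

size_kb=${1:-2097152}
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

# Round up so the reservation always covers the requested size.
nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
echo "nr_hugepages=${nr_hugepages}"   # 2097152 kB / 2048 kB -> 1024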
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.717 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.717 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- 
setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- 
setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.718 12:12:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:14.718 12:12:07 -- setup/common.sh@33 -- # echo 0 00:15:14.718 12:12:07 -- setup/common.sh@33 -- # return 0 00:15:14.718 12:12:07 -- setup/hugepages.sh@97 -- # anon=0 00:15:14.718 12:12:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:14.718 12:12:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:14.718 12:12:07 -- setup/common.sh@18 -- # local node= 00:15:14.718 12:12:07 -- setup/common.sh@19 -- # local var val 00:15:14.718 12:12:07 -- setup/common.sh@20 -- # local mem_f mem 00:15:14.718 12:12:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:14.718 12:12:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:14.718 12:12:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:14.718 12:12:07 -- setup/common.sh@28 -- # mapfile -t mem 00:15:14.718 12:12:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:14.718 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7693340 kB' 'MemAvailable: 9498528 kB' 'Buffers: 2952 kB' 'Cached: 2017360 kB' 'SwapCached: 0 kB' 'Active: 852080 kB' 'Inactive: 1291828 kB' 'Active(anon): 134072 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 540 kB' 'Writeback: 0 kB' 'AnonPages: 125180 kB' 'Mapped: 48872 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 137944 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73360 kB' 'KernelStack: 6528 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': 
' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 
-- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.719 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.719 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:07 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:07 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.720 12:12:07 -- setup/common.sh@33 -- # echo 0 00:15:14.720 
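Each get_meminfo call in this log walks /proc/meminfo field by field with IFS=': ' and read -r until the requested key matches, then echoes its value; the long runs of [[ ... ]] / continue above are that loop. A compact sketch of the same lookup pattern (hypothetical function name; the real helper is the setup/common.sh traced here):

#!/usr/bin/env bash
# Sketch of the /proc/meminfo lookup pattern traced above.
get_meminfo_field() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        # Field names in /proc/meminfo end with ':', which IFS strips here.
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

get_meminfo_field Hugepagesize     # e.g. 2048
get_meminfo_field HugePages_Surp   # e.g. 0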
12:12:07 -- setup/common.sh@33 -- # return 0 00:15:14.720 12:12:07 -- setup/hugepages.sh@99 -- # surp=0 00:15:14.720 12:12:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:14.720 12:12:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:14.720 12:12:07 -- setup/common.sh@18 -- # local node= 00:15:14.720 12:12:07 -- setup/common.sh@19 -- # local var val 00:15:14.720 12:12:07 -- setup/common.sh@20 -- # local mem_f mem 00:15:14.720 12:12:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:14.720 12:12:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:14.720 12:12:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:14.720 12:12:08 -- setup/common.sh@28 -- # mapfile -t mem 00:15:14.720 12:12:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7693340 kB' 'MemAvailable: 9498540 kB' 'Buffers: 2952 kB' 'Cached: 2017360 kB' 'SwapCached: 0 kB' 'Active: 851716 kB' 'Inactive: 1291840 kB' 'Active(anon): 133708 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 540 kB' 'Writeback: 0 kB' 'AnonPages: 124808 kB' 'Mapped: 48772 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 137940 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73356 kB' 'KernelStack: 6528 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.720 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.720 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # 
read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 
-- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.721 12:12:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:14.721 12:12:08 -- setup/common.sh@33 -- # echo 0 00:15:14.721 12:12:08 -- setup/common.sh@33 -- # return 0 00:15:14.721 12:12:08 -- setup/hugepages.sh@100 -- # resv=0 00:15:14.721 nr_hugepages=1024 00:15:14.721 12:12:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:15:14.721 resv_hugepages=0 00:15:14.721 12:12:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:14.721 surplus_hugepages=0 00:15:14.721 12:12:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:14.721 anon_hugepages=0 00:15:14.721 12:12:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:15:14.721 12:12:08 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:14.721 12:12:08 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:15:14.721 12:12:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:14.721 12:12:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:14.721 12:12:08 -- setup/common.sh@18 -- # local node= 00:15:14.721 12:12:08 -- setup/common.sh@19 -- # local var val 00:15:14.721 12:12:08 -- setup/common.sh@20 -- # local mem_f mem 00:15:14.721 12:12:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:14.721 12:12:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:14.721 12:12:08 -- setup/common.sh@25 -- # 
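At this point the test has echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and is re-reading HugePages_Total to confirm the configured count took effect. A simplified stand-alone version of that kind of sanity check (a hedged sketch only; verify_nr_hugepages itself also accounts for per-node counters):

#!/usr/bin/env bash
# Simplified sketch of a hugepage sanity check like the one performed above.
set -euo pipefail

expected=${1:-1024}    # the count the test configured earlier

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)

echo "nr_hugepages=${total} resv_hugepages=${rsvd} surplus_hugepages=${surp}"
(( total == expected )) || { echo "HugePages_Total=${total}, expected ${expected}" >&2; exit 1; }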
[[ -n '' ]] 00:15:14.721 12:12:08 -- setup/common.sh@28 -- # mapfile -t mem 00:15:14.721 12:12:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.721 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7693340 kB' 'MemAvailable: 9498540 kB' 'Buffers: 2952 kB' 'Cached: 2017360 kB' 'SwapCached: 0 kB' 'Active: 851436 kB' 'Inactive: 1291840 kB' 'Active(anon): 133428 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 540 kB' 'Writeback: 0 kB' 'AnonPages: 124600 kB' 'Mapped: 48772 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 137940 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73356 kB' 'KernelStack: 6560 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r 
var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # 
continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.722 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.722 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # 
continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:14.723 12:12:08 -- setup/common.sh@33 -- # echo 1024 00:15:14.723 12:12:08 -- setup/common.sh@33 -- # return 0 00:15:14.723 12:12:08 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:14.723 12:12:08 -- setup/hugepages.sh@112 -- # get_nodes 00:15:14.723 12:12:08 -- setup/hugepages.sh@27 -- # local node 00:15:14.723 12:12:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:14.723 12:12:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:15:14.723 12:12:08 -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:14.723 12:12:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:14.723 12:12:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:14.723 12:12:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:14.723 12:12:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:14.723 12:12:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:14.723 12:12:08 -- setup/common.sh@18 -- # local node=0 00:15:14.723 12:12:08 -- setup/common.sh@19 -- # local var val 00:15:14.723 12:12:08 -- setup/common.sh@20 -- # local mem_f mem 00:15:14.723 12:12:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:14.723 12:12:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:14.723 12:12:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:14.723 12:12:08 -- setup/common.sh@28 -- # mapfile -t mem 00:15:14.723 12:12:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7693340 kB' 'MemUsed: 4548640 kB' 'SwapCached: 0 kB' 'Active: 851712 kB' 'Inactive: 1291840 kB' 'Active(anon): 133704 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 540 kB' 'Writeback: 0 kB' 'FilePages: 2020312 kB' 'Mapped: 48772 kB' 'AnonPages: 124872 kB' 'Shmem: 10464 kB' 'KernelStack: 6560 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64584 kB' 'Slab: 137940 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.723 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.723 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # continue 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:14.724 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:14.724 12:12:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:14.724 12:12:08 -- setup/common.sh@33 -- # echo 0 00:15:14.724 12:12:08 -- setup/common.sh@33 -- # return 0 00:15:14.724 12:12:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:14.724 12:12:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:14.724 
12:12:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:14.724 12:12:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:14.724 12:12:08 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:15:14.724 node0=1024 expecting 1024 00:15:14.724 12:12:08 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:15:14.724 00:15:14.724 real 0m0.989s 00:15:14.724 user 0m0.466s 00:15:14.724 sys 0m0.476s 00:15:14.724 12:12:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:14.724 12:12:08 -- common/autotest_common.sh@10 -- # set +x 00:15:14.724 ************************************ 00:15:14.724 END TEST default_setup 00:15:14.724 ************************************ 00:15:14.724 12:12:08 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:15:14.724 12:12:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:14.724 12:12:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:14.724 12:12:08 -- common/autotest_common.sh@10 -- # set +x 00:15:14.983 ************************************ 00:15:14.983 START TEST per_node_1G_alloc 00:15:14.983 ************************************ 00:15:14.983 12:12:08 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:15:14.983 12:12:08 -- setup/hugepages.sh@143 -- # local IFS=, 00:15:14.983 12:12:08 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:15:14.983 12:12:08 -- setup/hugepages.sh@49 -- # local size=1048576 00:15:14.983 12:12:08 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:15:14.983 12:12:08 -- setup/hugepages.sh@51 -- # shift 00:15:14.983 12:12:08 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:15:14.983 12:12:08 -- setup/hugepages.sh@52 -- # local node_ids 00:15:14.983 12:12:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:15:14.983 12:12:08 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:15:14.983 12:12:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:15:14.983 12:12:08 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:15:14.983 12:12:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:15:14.983 12:12:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:15:14.983 12:12:08 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:15:14.983 12:12:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:15:14.983 12:12:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:15:14.983 12:12:08 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:15:14.983 12:12:08 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:15:14.983 12:12:08 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:15:14.983 12:12:08 -- setup/hugepages.sh@73 -- # return 0 00:15:14.983 12:12:08 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:15:14.983 12:12:08 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:15:14.983 12:12:08 -- setup/hugepages.sh@146 -- # setup output 00:15:14.983 12:12:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:14.983 12:12:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:15.300 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:15.300 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:15.300 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:15.300 12:12:08 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:15:15.300 12:12:08 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:15:15.300 12:12:08 -- setup/hugepages.sh@89 -- # local node 00:15:15.300 
12:12:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:15:15.300 12:12:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:15:15.300 12:12:08 -- setup/hugepages.sh@92 -- # local surp 00:15:15.300 12:12:08 -- setup/hugepages.sh@93 -- # local resv 00:15:15.300 12:12:08 -- setup/hugepages.sh@94 -- # local anon 00:15:15.300 12:12:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:15:15.300 12:12:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:15.300 12:12:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:15.300 12:12:08 -- setup/common.sh@18 -- # local node= 00:15:15.300 12:12:08 -- setup/common.sh@19 -- # local var val 00:15:15.300 12:12:08 -- setup/common.sh@20 -- # local mem_f mem 00:15:15.300 12:12:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:15.300 12:12:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:15.300 12:12:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:15.300 12:12:08 -- setup/common.sh@28 -- # mapfile -t mem 00:15:15.300 12:12:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8748772 kB' 'MemAvailable: 10553976 kB' 'Buffers: 2952 kB' 'Cached: 2017364 kB' 'SwapCached: 0 kB' 'Active: 852300 kB' 'Inactive: 1291844 kB' 'Active(anon): 134292 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 868 kB' 'Writeback: 0 kB' 'AnonPages: 125420 kB' 'Mapped: 48916 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 137988 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73404 kB' 'KernelStack: 6532 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 
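Note on the per_node_1G_alloc setup traced just above: get_test_nr_hugepages is given 1048576 kB and node id 0, arrives at nr_hugepages=512, and then runs scripts/setup.sh with NRHUGE=512 and HUGENODE=0. The arithmetic is consistent with dividing the request by the 2048 kB Hugepagesize reported in /proc/meminfo. The standalone recap below is only a sketch of that computation, not the hugepages.sh code itself; the awk extraction and variable names are illustrative, while NRHUGE, HUGENODE and scripts/setup.sh are the ones visible in the trace.

size_kb=1048576                                           # requested test size in kB (1 GiB)
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
nr_hugepages=$(( size_kb / hugepagesize_kb ))             # 1048576 / 2048 = 512
HUGENODE=0 NRHUGE=$nr_hugepages ./scripts/setup.sh        # reserve the pages on node 0 and (re)bind devices

The setup.sh output in the trace ("Active devices: ... so not binding PCI dev", "Already using the uio_pci_generic driver") is the device-binding half of that same step.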
00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.300 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.300 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 
12:12:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.301 12:12:08 -- setup/common.sh@33 -- # echo 0 00:15:15.301 12:12:08 -- setup/common.sh@33 -- # return 0 00:15:15.301 12:12:08 -- setup/hugepages.sh@97 -- # anon=0 00:15:15.301 12:12:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:15.301 12:12:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:15.301 12:12:08 -- setup/common.sh@18 -- # local 
node= 00:15:15.301 12:12:08 -- setup/common.sh@19 -- # local var val 00:15:15.301 12:12:08 -- setup/common.sh@20 -- # local mem_f mem 00:15:15.301 12:12:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:15.301 12:12:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:15.301 12:12:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:15.301 12:12:08 -- setup/common.sh@28 -- # mapfile -t mem 00:15:15.301 12:12:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8748772 kB' 'MemAvailable: 10553976 kB' 'Buffers: 2952 kB' 'Cached: 2017364 kB' 'SwapCached: 0 kB' 'Active: 851652 kB' 'Inactive: 1291844 kB' 'Active(anon): 133644 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 868 kB' 'Writeback: 0 kB' 'AnonPages: 124740 kB' 'Mapped: 48916 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 137988 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73404 kB' 'KernelStack: 6500 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 
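The repeated IFS=': ' / read -r / continue entries above are get_meminfo from setup/common.sh scanning one meminfo line per iteration until it reaches the requested key, then echoing its value. A rough standalone equivalent of that loop is sketched below under the same assumptions the trace shows (global /proc/meminfo vs. the per-node /sys/devices/system/node/nodeN/meminfo file, and stripping the "Node N " prefix); it is not the SPDK helper itself, just an approximation of its behavior.

shopt -s extglob                         # needed for the +([0-9]) pattern below

get_meminfo() {                          # usage: get_meminfo <Key> [<numa node>]
  local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
    && mem_f=/sys/devices/system/node/node$node/meminfo
  while IFS= read -r line; do
    line=${line#Node +([0-9]) }          # per-node files prefix every key with "Node N "
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < "$mem_f"
  return 1
}

For example, get_meminfo HugePages_Surp 0 would print the node-0 surplus count, matching the "echo 0" seen when the traced scans complete.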
00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.301 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.301 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- 
setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.302 12:12:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.302 12:12:08 -- setup/common.sh@33 -- # echo 0 00:15:15.302 12:12:08 -- setup/common.sh@33 -- # return 0 00:15:15.302 12:12:08 -- setup/hugepages.sh@99 -- # surp=0 00:15:15.302 12:12:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:15.302 12:12:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:15.302 12:12:08 -- setup/common.sh@18 -- # local node= 00:15:15.302 12:12:08 -- setup/common.sh@19 -- # local var val 00:15:15.302 12:12:08 -- setup/common.sh@20 -- # local mem_f mem 00:15:15.302 12:12:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:15.302 12:12:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:15.302 12:12:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:15.302 12:12:08 -- setup/common.sh@28 -- # mapfile -t mem 00:15:15.302 12:12:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:15.302 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8748772 kB' 'MemAvailable: 10553976 kB' 'Buffers: 2952 kB' 'Cached: 2017364 kB' 'SwapCached: 0 kB' 'Active: 851852 kB' 'Inactive: 1291844 kB' 'Active(anon): 133844 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 908 kB' 'Writeback: 0 kB' 'AnonPages: 124944 kB' 'Mapped: 48920 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 137984 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73400 kB' 'KernelStack: 6468 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # 
read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 
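For readers following the trace: the get_meminfo walk above (IFS=': ' plus read -r var val _ over /proc/meminfo, skipping every key until the requested one) amounts to the small loop sketched below. This is a standalone illustration only, not the setup/common.sh code itself; the function name get_meminfo_field and its arguments are hypothetical.

#!/usr/bin/env bash
# Minimal sketch: return one value from /proc/meminfo, the same scan the
# xtrace above performs for HugePages_Surp / HugePages_Rsvd (function and
# variable names here are illustrative, not taken from setup/common.sh).
get_meminfo_field() {
    local key=$1 file=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "${val:-0}"; return 0; }
    done <"$file"
    return 1
}
# Example: get_meminfo_field HugePages_Rsvd   -> prints 0 on the run traced above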
00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.303 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.303 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.304 12:12:08 -- setup/common.sh@33 -- # echo 0 00:15:15.304 12:12:08 -- setup/common.sh@33 -- # return 0 00:15:15.304 12:12:08 -- setup/hugepages.sh@100 -- # resv=0 00:15:15.304 nr_hugepages=512 00:15:15.304 12:12:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:15:15.304 12:12:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:15.304 resv_hugepages=0 00:15:15.304 surplus_hugepages=0 00:15:15.304 12:12:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:15.304 anon_hugepages=0 00:15:15.304 12:12:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:15:15.304 12:12:08 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:15:15.304 12:12:08 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:15:15.304 12:12:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:15.304 12:12:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:15.304 12:12:08 -- setup/common.sh@18 -- # local node= 00:15:15.304 12:12:08 -- setup/common.sh@19 -- # local var val 00:15:15.304 12:12:08 -- setup/common.sh@20 -- # local mem_f mem 00:15:15.304 12:12:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:15.304 12:12:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:15.304 12:12:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:15.304 12:12:08 -- setup/common.sh@28 -- # mapfile -t mem 00:15:15.304 12:12:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8748772 kB' 'MemAvailable: 10553976 kB' 'Buffers: 2952 kB' 'Cached: 2017364 kB' 'SwapCached: 0 kB' 'Active: 851744 kB' 'Inactive: 1291844 kB' 'Active(anon): 133736 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 908 kB' 'Writeback: 0 kB' 'AnonPages: 124872 kB' 'Mapped: 48792 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 137980 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73396 kB' 'KernelStack: 6544 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.304 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.304 12:12:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- 
setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 
12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.305 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.305 12:12:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.305 12:12:08 -- setup/common.sh@33 -- # echo 512 00:15:15.305 12:12:08 -- setup/common.sh@33 -- # return 0 00:15:15.305 12:12:08 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv 
)) 00:15:15.305 12:12:08 -- setup/hugepages.sh@112 -- # get_nodes 00:15:15.305 12:12:08 -- setup/hugepages.sh@27 -- # local node 00:15:15.305 12:12:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:15.305 12:12:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:15:15.305 12:12:08 -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:15.305 12:12:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:15.305 12:12:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:15.305 12:12:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:15.305 12:12:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:15.305 12:12:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:15.305 12:12:08 -- setup/common.sh@18 -- # local node=0 00:15:15.305 12:12:08 -- setup/common.sh@19 -- # local var val 00:15:15.305 12:12:08 -- setup/common.sh@20 -- # local mem_f mem 00:15:15.306 12:12:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:15.306 12:12:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:15.306 12:12:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:15.306 12:12:08 -- setup/common.sh@28 -- # mapfile -t mem 00:15:15.306 12:12:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8749116 kB' 'MemUsed: 3492864 kB' 'SwapCached: 0 kB' 'Active: 851680 kB' 'Inactive: 1291844 kB' 'Active(anon): 133672 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 908 kB' 'Writeback: 0 kB' 'FilePages: 2020316 kB' 'Mapped: 48792 kB' 'AnonPages: 124804 kB' 'Shmem: 10464 kB' 'KernelStack: 6528 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64584 kB' 'Slab: 137980 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- 
setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # continue 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.306 12:12:08 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.306 12:12:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.306 12:12:08 -- setup/common.sh@33 -- # echo 0 00:15:15.307 12:12:08 -- setup/common.sh@33 -- # return 0 00:15:15.307 12:12:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:15.307 12:12:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:15.307 12:12:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:15.307 12:12:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:15.307 node0=512 expecting 512 00:15:15.307 12:12:08 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:15:15.307 12:12:08 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:15:15.307 00:15:15.307 real 0m0.505s 00:15:15.307 user 0m0.259s 00:15:15.307 sys 0m0.279s 00:15:15.307 12:12:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:15.307 12:12:08 -- common/autotest_common.sh@10 -- # set +x 00:15:15.307 ************************************ 00:15:15.307 END TEST per_node_1G_alloc 00:15:15.307 ************************************ 00:15:15.307 12:12:08 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:15:15.307 12:12:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:15.307 12:12:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:15.307 12:12:08 -- common/autotest_common.sh@10 -- # set +x 00:15:15.581 ************************************ 00:15:15.581 START TEST even_2G_alloc 00:15:15.581 ************************************ 00:15:15.581 12:12:08 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:15:15.581 12:12:08 -- setup/hugepages.sh@152 -- # 
get_test_nr_hugepages 2097152 00:15:15.581 12:12:08 -- setup/hugepages.sh@49 -- # local size=2097152 00:15:15.581 12:12:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:15:15.581 12:12:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:15:15.581 12:12:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:15:15.581 12:12:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:15:15.581 12:12:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:15:15.581 12:12:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:15:15.581 12:12:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:15:15.581 12:12:08 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:15:15.581 12:12:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:15:15.581 12:12:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:15:15.581 12:12:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:15:15.581 12:12:08 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:15:15.581 12:12:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:15:15.581 12:12:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:15:15.581 12:12:08 -- setup/hugepages.sh@83 -- # : 0 00:15:15.581 12:12:08 -- setup/hugepages.sh@84 -- # : 0 00:15:15.581 12:12:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:15:15.581 12:12:08 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:15:15.581 12:12:08 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:15:15.581 12:12:08 -- setup/hugepages.sh@153 -- # setup output 00:15:15.581 12:12:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:15.581 12:12:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:15.841 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:15.841 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:15.841 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:15.841 12:12:09 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:15:15.841 12:12:09 -- setup/hugepages.sh@89 -- # local node 00:15:15.841 12:12:09 -- setup/hugepages.sh@90 -- # local sorted_t 00:15:15.841 12:12:09 -- setup/hugepages.sh@91 -- # local sorted_s 00:15:15.841 12:12:09 -- setup/hugepages.sh@92 -- # local surp 00:15:15.841 12:12:09 -- setup/hugepages.sh@93 -- # local resv 00:15:15.841 12:12:09 -- setup/hugepages.sh@94 -- # local anon 00:15:15.841 12:12:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:15:15.841 12:12:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:15.841 12:12:09 -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:15.841 12:12:09 -- setup/common.sh@18 -- # local node= 00:15:15.841 12:12:09 -- setup/common.sh@19 -- # local var val 00:15:15.841 12:12:09 -- setup/common.sh@20 -- # local mem_f mem 00:15:15.841 12:12:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:15.841 12:12:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:15.841 12:12:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:15.841 12:12:09 -- setup/common.sh@28 -- # mapfile -t mem 00:15:15.841 12:12:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7696964 kB' 'MemAvailable: 9502168 kB' 'Buffers: 2952 kB' 'Cached: 2017364 kB' 'SwapCached: 0 kB' 'Active: 852140 kB' 
'Inactive: 1291844 kB' 'Active(anon): 134132 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1092 kB' 'Writeback: 0 kB' 'AnonPages: 125296 kB' 'Mapped: 48932 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 137996 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73412 kB' 'KernelStack: 6548 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 
-- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 
12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.841 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.841 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:15.842 12:12:09 -- setup/common.sh@33 -- # echo 0 00:15:15.842 12:12:09 -- setup/common.sh@33 -- # return 0 00:15:15.842 12:12:09 -- setup/hugepages.sh@97 -- # anon=0 00:15:15.842 12:12:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:15.842 12:12:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:15.842 12:12:09 -- setup/common.sh@18 -- # local node= 00:15:15.842 12:12:09 -- setup/common.sh@19 -- # local var val 00:15:15.842 12:12:09 -- setup/common.sh@20 -- # local mem_f mem 00:15:15.842 12:12:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:15.842 12:12:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:15.842 12:12:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:15.842 12:12:09 -- setup/common.sh@28 -- # mapfile -t mem 00:15:15.842 12:12:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7696964 kB' 'MemAvailable: 9502168 kB' 'Buffers: 2952 kB' 'Cached: 2017364 kB' 'SwapCached: 0 kB' 'Active: 851784 kB' 'Inactive: 1291844 kB' 'Active(anon): 133776 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1092 kB' 'Writeback: 0 kB' 'AnonPages: 124900 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 138024 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73440 kB' 'KernelStack: 6560 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357104 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 
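The loop traced above is setup/common.sh's get_meminfo helper scanning the /proc/meminfo snapshot it just printed, one "key: value" pair at a time, until the requested field (HugePages_Surp here) matches. A minimal stand-alone sketch of that pattern, using a hypothetical function name rather than the real SPDK helper:

    #!/usr/bin/env bash
    # Walk the "key: value" pairs of a meminfo file and print the value of
    # one requested key, defaulting to 0 if the key is absent.
    meminfo_value() {
        local get=$1 mem_f=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        echo 0
    }

    meminfo_value HugePages_Surp    # 0 on the VM traced here
    meminfo_value Hugepagesize      # 2048 (kB)

As the trace shows, the real helper can also be pointed at a per-node file under /sys/devices/system/node/node<N>/meminfo, in which case it first strips the leading "Node <n> " prefix from every line before scanning.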
00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # 
IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
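Two fields in that snapshot already pin down the state this verification step expects: 1024 hugepages of 2048 kB each, none of them in use, which is exactly the reported Hugetlb total. A quick cross-check of that figure:

    echo $((1024 * 2048))   # 2097152 kB, the 'Hugetlb:' value in the snapshot above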
00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 
-- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:15.842 12:12:09 -- setup/common.sh@33 -- # echo 0 00:15:15.842 12:12:09 -- setup/common.sh@33 -- # return 0 00:15:15.842 12:12:09 -- setup/hugepages.sh@99 -- # surp=0 00:15:15.842 12:12:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:15.842 12:12:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:15.842 12:12:09 -- setup/common.sh@18 -- # local node= 00:15:15.842 12:12:09 -- setup/common.sh@19 -- # local var val 00:15:15.842 12:12:09 -- setup/common.sh@20 -- # local mem_f mem 00:15:15.842 12:12:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:15.842 12:12:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:15.842 12:12:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:15.842 12:12:09 -- setup/common.sh@28 -- # mapfile -t mem 00:15:15.842 12:12:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7697216 kB' 'MemAvailable: 9502420 kB' 'Buffers: 2952 kB' 'Cached: 2017364 kB' 'SwapCached: 0 kB' 'Active: 851512 kB' 'Inactive: 1291844 kB' 'Active(anon): 133504 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1092 kB' 'Writeback: 0 kB' 'AnonPages: 124644 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 138024 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73440 kB' 'KernelStack: 6560 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.842 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.842 12:12:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 
-- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 
12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:15.843 12:12:09 -- setup/common.sh@33 -- # echo 0 00:15:15.843 12:12:09 -- setup/common.sh@33 -- # return 0 00:15:15.843 12:12:09 -- setup/hugepages.sh@100 -- # resv=0 00:15:15.843 nr_hugepages=1024 00:15:15.843 12:12:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:15:15.843 resv_hugepages=0 00:15:15.843 12:12:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:15.843 surplus_hugepages=0 00:15:15.843 12:12:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:15.843 anon_hugepages=0 00:15:15.843 12:12:09 -- 
setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:15:15.843 12:12:09 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:15.843 12:12:09 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:15:15.843 12:12:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:15.843 12:12:09 -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:15.843 12:12:09 -- setup/common.sh@18 -- # local node= 00:15:15.843 12:12:09 -- setup/common.sh@19 -- # local var val 00:15:15.843 12:12:09 -- setup/common.sh@20 -- # local mem_f mem 00:15:15.843 12:12:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:15.843 12:12:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:15.843 12:12:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:15.843 12:12:09 -- setup/common.sh@28 -- # mapfile -t mem 00:15:15.843 12:12:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7697216 kB' 'MemAvailable: 9502420 kB' 'Buffers: 2952 kB' 'Cached: 2017364 kB' 'SwapCached: 0 kB' 'Active: 851752 kB' 'Inactive: 1291844 kB' 'Active(anon): 133744 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1092 kB' 'Writeback: 0 kB' 'AnonPages: 124884 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 138024 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73440 kB' 'KernelStack: 6560 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- 
# [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:15.843 12:12:09 -- setup/common.sh@32 -- # continue 00:15:15.843 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': 
' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
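The hugepages.sh lines interleaved above show what these repeated get_meminfo calls feed into: a simple balance check that the anonymous, surplus and reserved counts leave the expected 1024-page pool intact. Condensed, with the values the trace returned:

    # anon, surp and resv came from the get_meminfo calls traced above;
    # nr_hugepages is HugePages_Total from the same snapshot.
    anon=0 surp=0 resv=0 nr_hugepages=1024
    if (( 1024 == nr_hugepages + surp + resv )); then
        echo "even_2G_alloc: 1024-page pool verified"
    fi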
00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.103 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.103 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.104 12:12:09 -- setup/common.sh@33 -- # echo 1024 00:15:16.104 12:12:09 -- setup/common.sh@33 -- # return 0 00:15:16.104 12:12:09 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:16.104 12:12:09 -- setup/hugepages.sh@112 -- # get_nodes 00:15:16.104 12:12:09 -- setup/hugepages.sh@27 -- # local node 00:15:16.104 12:12:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:16.104 12:12:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:15:16.104 12:12:09 -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:16.104 12:12:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:16.104 12:12:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:16.104 12:12:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:16.104 12:12:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:16.104 12:12:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:16.104 12:12:09 -- setup/common.sh@18 -- # local node=0 00:15:16.104 12:12:09 -- setup/common.sh@19 -- # local var val 00:15:16.104 12:12:09 -- setup/common.sh@20 -- # local mem_f mem 00:15:16.104 12:12:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:16.104 12:12:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:16.104 12:12:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:16.104 12:12:09 -- setup/common.sh@28 -- # mapfile -t mem 00:15:16.104 12:12:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 
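The loop that has just started repeats the same reads per NUMA node: get_nodes found a single node, and get_meminfo is now pointed at /sys/devices/system/node/node0/meminfo instead of /proc/meminfo. A brief stand-alone equivalent of that walk (using awk for compactness, which the SPDK helper itself does not):

    shopt -s nullglob
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
        echo "node$node: $total hugepages"
    done

On the single-node VM traced here this prints "node0: 1024 hugepages", matching the "node0=1024 expecting 1024" echo further down.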
00:15:16.104 12:12:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7698228 kB' 'MemUsed: 4543752 kB' 'SwapCached: 0 kB' 'Active: 851708 kB' 'Inactive: 1291844 kB' 'Active(anon): 133700 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1092 kB' 'Writeback: 0 kB' 'FilePages: 2020316 kB' 'Mapped: 48804 kB' 'AnonPages: 124832 kB' 'Shmem: 10464 kB' 'KernelStack: 6528 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64584 kB' 'Slab: 138024 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 
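Note that the per-node snapshot just printed carries a slightly different field set from the global one: MemUsed and FilePages appear, while SwapTotal, the vmalloc counters and the DirectMap lines do not. MemUsed is simply the difference of the first two fields:

    echo $((12241980 - 7698228))   # 4543752 kB, the MemUsed value in the node0 snapshot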
00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.104 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.104 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.105 
12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.105 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.105 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.105 12:12:09 -- setup/common.sh@33 -- # echo 0 00:15:16.105 12:12:09 -- setup/common.sh@33 -- # return 0 00:15:16.105 12:12:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:16.105 12:12:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:16.105 12:12:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:16.105 12:12:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:16.105 node0=1024 expecting 1024 00:15:16.105 12:12:09 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:15:16.105 12:12:09 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:15:16.105 00:15:16.105 real 0m0.518s 00:15:16.105 user 0m0.261s 00:15:16.105 sys 0m0.291s 00:15:16.105 12:12:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:16.105 12:12:09 -- common/autotest_common.sh@10 -- # set +x 00:15:16.105 ************************************ 00:15:16.105 END TEST even_2G_alloc 00:15:16.105 ************************************ 00:15:16.105 12:12:09 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:15:16.105 12:12:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:16.105 12:12:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:16.105 12:12:09 -- common/autotest_common.sh@10 -- # set +x 00:15:16.105 ************************************ 00:15:16.105 START TEST odd_alloc 00:15:16.105 ************************************ 00:15:16.105 12:12:09 -- common/autotest_common.sh@1111 -- # odd_alloc 00:15:16.105 12:12:09 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:15:16.105 12:12:09 -- setup/hugepages.sh@49 -- # local size=2098176 00:15:16.105 12:12:09 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:15:16.105 12:12:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:15:16.105 12:12:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:15:16.105 12:12:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:15:16.105 12:12:09 -- setup/hugepages.sh@62 -- # user_nodes=() 00:15:16.105 12:12:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:15:16.105 12:12:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:15:16.105 12:12:09 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:15:16.105 12:12:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:15:16.105 12:12:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:15:16.105 12:12:09 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:15:16.105 12:12:09 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:15:16.105 12:12:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:15:16.105 12:12:09 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:15:16.105 12:12:09 -- setup/hugepages.sh@83 -- # : 0 00:15:16.105 12:12:09 -- setup/hugepages.sh@84 -- # : 0 00:15:16.105 12:12:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:15:16.105 12:12:09 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:15:16.105 12:12:09 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:15:16.105 12:12:09 -- setup/hugepages.sh@160 -- # setup output 00:15:16.105 12:12:09 -- setup/common.sh@9 -- 
# [[ output == output ]] 00:15:16.105 12:12:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:16.364 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:16.364 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:16.364 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:16.626 12:12:09 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:15:16.626 12:12:09 -- setup/hugepages.sh@89 -- # local node 00:15:16.626 12:12:09 -- setup/hugepages.sh@90 -- # local sorted_t 00:15:16.626 12:12:09 -- setup/hugepages.sh@91 -- # local sorted_s 00:15:16.626 12:12:09 -- setup/hugepages.sh@92 -- # local surp 00:15:16.626 12:12:09 -- setup/hugepages.sh@93 -- # local resv 00:15:16.626 12:12:09 -- setup/hugepages.sh@94 -- # local anon 00:15:16.626 12:12:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:15:16.626 12:12:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:16.626 12:12:09 -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:16.626 12:12:09 -- setup/common.sh@18 -- # local node= 00:15:16.626 12:12:09 -- setup/common.sh@19 -- # local var val 00:15:16.626 12:12:09 -- setup/common.sh@20 -- # local mem_f mem 00:15:16.626 12:12:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:16.626 12:12:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:16.626 12:12:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:16.626 12:12:09 -- setup/common.sh@28 -- # mapfile -t mem 00:15:16.626 12:12:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:16.626 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.626 12:12:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7691940 kB' 'MemAvailable: 9497148 kB' 'Buffers: 2952 kB' 'Cached: 2017368 kB' 'SwapCached: 0 kB' 'Active: 851928 kB' 'Inactive: 1291848 kB' 'Active(anon): 133920 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1288 kB' 'Writeback: 0 kB' 'AnonPages: 125056 kB' 'Mapped: 48948 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 138048 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73464 kB' 'KernelStack: 6624 kB' 'PageTables: 4708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 357104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:16.626 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.626 12:12:09 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.626 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.626 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 
00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 
12:12:09 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # 
IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:16.627 12:12:09 -- setup/common.sh@33 -- # echo 0 00:15:16.627 12:12:09 -- setup/common.sh@33 -- # return 0 00:15:16.627 12:12:09 -- setup/hugepages.sh@97 -- # anon=0 00:15:16.627 12:12:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:16.627 12:12:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:16.627 12:12:09 -- setup/common.sh@18 -- # local node= 00:15:16.627 12:12:09 -- setup/common.sh@19 -- # local var val 00:15:16.627 12:12:09 -- setup/common.sh@20 -- # local mem_f mem 00:15:16.627 12:12:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:16.627 12:12:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:16.627 12:12:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:16.627 12:12:09 -- setup/common.sh@28 -- # mapfile -t mem 00:15:16.627 12:12:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7691940 kB' 'MemAvailable: 9497148 kB' 'Buffers: 2952 kB' 'Cached: 2017368 kB' 'SwapCached: 0 kB' 'Active: 851572 kB' 'Inactive: 1291848 kB' 'Active(anon): 133564 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1288 kB' 'Writeback: 0 kB' 'AnonPages: 124672 kB' 'Mapped: 48820 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 138048 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73464 kB' 'KernelStack: 6544 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 357104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 
00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.627 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.627 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- 
setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # 
IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.628 12:12:09 -- setup/common.sh@33 -- # echo 0 00:15:16.628 12:12:09 -- setup/common.sh@33 -- # return 0 00:15:16.628 12:12:09 -- setup/hugepages.sh@99 -- # surp=0 00:15:16.628 12:12:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:16.628 12:12:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:16.628 12:12:09 -- setup/common.sh@18 -- # local node= 00:15:16.628 12:12:09 -- setup/common.sh@19 -- # local var val 00:15:16.628 12:12:09 -- setup/common.sh@20 -- # local mem_f mem 00:15:16.628 12:12:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:16.628 12:12:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:16.628 12:12:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:16.628 12:12:09 -- setup/common.sh@28 -- # mapfile -t mem 00:15:16.628 12:12:09 
-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7691940 kB' 'MemAvailable: 9497148 kB' 'Buffers: 2952 kB' 'Cached: 2017368 kB' 'SwapCached: 0 kB' 'Active: 851748 kB' 'Inactive: 1291848 kB' 'Active(anon): 133740 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1288 kB' 'Writeback: 0 kB' 'AnonPages: 124888 kB' 'Mapped: 49080 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 138048 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73464 kB' 'KernelStack: 6576 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.628 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.628 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 
00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- 
setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ 
CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:16.629 12:12:09 -- setup/common.sh@33 -- # echo 0 00:15:16.629 12:12:09 -- setup/common.sh@33 -- # return 0 00:15:16.629 12:12:09 -- setup/hugepages.sh@100 -- # resv=0 00:15:16.629 nr_hugepages=1025 00:15:16.629 12:12:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:15:16.629 resv_hugepages=0 00:15:16.629 12:12:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:16.629 12:12:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:16.629 surplus_hugepages=0 00:15:16.629 anon_hugepages=0 00:15:16.629 12:12:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:15:16.629 12:12:09 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:15:16.629 12:12:09 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:15:16.629 12:12:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:16.629 12:12:09 -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:16.629 12:12:09 -- setup/common.sh@18 -- # local node= 00:15:16.629 12:12:09 -- setup/common.sh@19 -- # local var val 00:15:16.629 12:12:09 -- setup/common.sh@20 -- # local mem_f mem 00:15:16.629 12:12:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:16.629 12:12:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:16.629 12:12:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:16.629 12:12:09 -- setup/common.sh@28 -- # mapfile -t mem 00:15:16.629 12:12:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7691940 kB' 'MemAvailable: 9497148 kB' 'Buffers: 2952 kB' 'Cached: 2017368 kB' 'SwapCached: 0 kB' 'Active: 851560 kB' 'Inactive: 1291848 kB' 'Active(anon): 133552 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1288 kB' 
'Writeback: 0 kB' 'AnonPages: 124424 kB' 'Mapped: 48820 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 138044 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73460 kB' 'KernelStack: 6512 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 357104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.629 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.629 12:12:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 
12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 
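The long runs of "IFS=': '" / "read -r var val _" / "continue" entries running through this part of the log are the xtrace of the field scan inside get_meminfo in setup/common.sh: the meminfo snapshot printed by the @16 printf is walked one field at a time, every line whose key is not the requested one is skipped with continue, and the value is echoed when the key matches (HugePages_Total in this scan). A condensed, self-contained sketch of that loop, reconstructed from the trace; only the names visible in the trace are taken from it, the exact plumbing around mapfile and the process substitution are assumptions:

    # Condensed sketch of the loop producing the @31/@32 entries above and below
    # (setup/common.sh); the real helper is get_meminfo, this is an approximation.
    get_meminfo_sketch() {
        local get=$1 var val
        local mem
        mapfile -t mem < /proc/meminfo              # snapshot once (printed by the @16 printf)
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue        # one 'continue' trace entry per skipped field
            echo "$val"                             # e.g. 1025 for HugePages_Total in this run
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo_sketch HugePages_Total              # printed 1025 on this runner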
00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.630 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:16.630 12:12:09 -- setup/common.sh@33 -- # echo 1025 00:15:16.630 12:12:09 -- setup/common.sh@33 -- # return 0 00:15:16.630 12:12:09 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:15:16.630 12:12:09 -- setup/hugepages.sh@112 -- # get_nodes 00:15:16.630 12:12:09 -- setup/hugepages.sh@27 -- # local node 00:15:16.630 12:12:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:16.630 12:12:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:15:16.630 12:12:09 -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:16.630 12:12:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:16.630 12:12:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:16.630 12:12:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:16.630 12:12:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:16.630 12:12:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:16.630 12:12:09 -- setup/common.sh@18 -- # local node=0 00:15:16.630 12:12:09 -- setup/common.sh@19 -- # local var val 00:15:16.630 12:12:09 -- setup/common.sh@20 -- # local mem_f mem 00:15:16.630 12:12:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:16.630 12:12:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:16.630 12:12:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:16.630 12:12:09 -- setup/common.sh@28 -- # mapfile -t mem 00:15:16.630 12:12:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.630 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7691940 kB' 'MemUsed: 4550040 kB' 'SwapCached: 0 kB' 'Active: 851456 kB' 'Inactive: 1291848 kB' 'Active(anon): 133448 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1288 kB' 'Writeback: 0 kB' 'FilePages: 2020320 kB' 'Mapped: 48820 kB' 'AnonPages: 124596 kB' 'Shmem: 10464 kB' 'KernelStack: 6580 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64584 kB' 'Slab: 138044 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
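Once that scan returns 1025, hugepages.sh@110 asserts that the kernel's HugePages_Total matches what the test configured plus surplus and reserved pages, and the same lookup is then repeated per NUMA node: with node=0 passed, get_meminfo switches its source from /proc/meminfo to the node-local meminfo file and strips the "Node 0 " prefix each of its lines carries. A short sketch of the two pieces, using this run's values; the mapfile redirection is an assumption, the other commands appear verbatim in the trace above:

    # hugepages.sh@110: the odd_alloc consistency check, with this run's values.
    nr_hugepages=1025 surp=0 resv=0
    (( 1025 == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"

    # common.sh@22-29: prefer the node-local meminfo when a node number is given.
    shopt -s extglob                                    # needed for the +([0-9]) pattern below
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"                           # redirection assumed, not shown in the trace
    mem=("${mem[@]#Node +([0-9]) }")                    # drop the leading "Node 0 " from every line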
00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 
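A quick sanity check on the node0 snapshot printed above: MemUsed is simply MemTotal minus MemFree, so the three figures it reports are mutually consistent:

    echo $(( 12241980 - 7691940 ))   # MemTotal - MemFree = 4550040 kB, the MemUsed value above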
00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- 
setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # continue 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # IFS=': ' 00:15:16.631 12:12:09 -- setup/common.sh@31 -- # read -r var val _ 00:15:16.631 12:12:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:16.631 12:12:09 -- setup/common.sh@33 -- # echo 0 00:15:16.631 12:12:09 -- setup/common.sh@33 -- # return 0 00:15:16.631 12:12:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:16.631 12:12:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:16.631 12:12:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:16.631 12:12:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:16.631 node0=1025 expecting 1025 00:15:16.631 12:12:09 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:15:16.631 12:12:09 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:15:16.631 00:15:16.631 real 0m0.510s 00:15:16.631 user 0m0.228s 00:15:16.631 sys 0m0.314s 00:15:16.631 12:12:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:16.631 12:12:09 -- common/autotest_common.sh@10 -- # set +x 00:15:16.631 ************************************ 00:15:16.631 END TEST odd_alloc 00:15:16.631 ************************************ 00:15:16.631 12:12:10 -- setup/hugepages.sh@214 -- # run_test 
custom_alloc custom_alloc 00:15:16.631 12:12:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:16.631 12:12:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:16.631 12:12:10 -- common/autotest_common.sh@10 -- # set +x 00:15:16.890 ************************************ 00:15:16.890 START TEST custom_alloc 00:15:16.890 ************************************ 00:15:16.890 12:12:10 -- common/autotest_common.sh@1111 -- # custom_alloc 00:15:16.890 12:12:10 -- setup/hugepages.sh@167 -- # local IFS=, 00:15:16.890 12:12:10 -- setup/hugepages.sh@169 -- # local node 00:15:16.890 12:12:10 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:15:16.890 12:12:10 -- setup/hugepages.sh@170 -- # local nodes_hp 00:15:16.890 12:12:10 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:15:16.890 12:12:10 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:15:16.890 12:12:10 -- setup/hugepages.sh@49 -- # local size=1048576 00:15:16.890 12:12:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:15:16.890 12:12:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:15:16.890 12:12:10 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:15:16.890 12:12:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:15:16.890 12:12:10 -- setup/hugepages.sh@62 -- # user_nodes=() 00:15:16.890 12:12:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:15:16.890 12:12:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:15:16.890 12:12:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:15:16.890 12:12:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:15:16.890 12:12:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:15:16.890 12:12:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:15:16.890 12:12:10 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:15:16.890 12:12:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:15:16.890 12:12:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:15:16.890 12:12:10 -- setup/hugepages.sh@83 -- # : 0 00:15:16.890 12:12:10 -- setup/hugepages.sh@84 -- # : 0 00:15:16.890 12:12:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:15:16.890 12:12:10 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:15:16.890 12:12:10 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:15:16.890 12:12:10 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:15:16.890 12:12:10 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:15:16.890 12:12:10 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:15:16.890 12:12:10 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:15:16.890 12:12:10 -- setup/hugepages.sh@62 -- # user_nodes=() 00:15:16.890 12:12:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:15:16.890 12:12:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:15:16.890 12:12:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:15:16.890 12:12:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:15:16.890 12:12:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:15:16.890 12:12:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:15:16.890 12:12:10 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:15:16.890 12:12:10 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:15:16.890 12:12:10 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:15:16.890 12:12:10 -- setup/hugepages.sh@78 -- # return 0 00:15:16.890 12:12:10 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:15:16.890 12:12:10 -- setup/hugepages.sh@187 -- # setup 
output 00:15:16.890 12:12:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:16.890 12:12:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:17.152 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:17.152 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:17.152 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:17.152 12:12:10 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:15:17.152 12:12:10 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:15:17.152 12:12:10 -- setup/hugepages.sh@89 -- # local node 00:15:17.152 12:12:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:15:17.152 12:12:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:15:17.152 12:12:10 -- setup/hugepages.sh@92 -- # local surp 00:15:17.152 12:12:10 -- setup/hugepages.sh@93 -- # local resv 00:15:17.152 12:12:10 -- setup/hugepages.sh@94 -- # local anon 00:15:17.152 12:12:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:15:17.152 12:12:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:17.152 12:12:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:17.152 12:12:10 -- setup/common.sh@18 -- # local node= 00:15:17.152 12:12:10 -- setup/common.sh@19 -- # local var val 00:15:17.152 12:12:10 -- setup/common.sh@20 -- # local mem_f mem 00:15:17.152 12:12:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:17.152 12:12:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:17.152 12:12:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:17.152 12:12:10 -- setup/common.sh@28 -- # mapfile -t mem 00:15:17.152 12:12:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.152 12:12:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8744032 kB' 'MemAvailable: 10549276 kB' 'Buffers: 2952 kB' 'Cached: 2017404 kB' 'SwapCached: 0 kB' 'Active: 851748 kB' 'Inactive: 1291884 kB' 'Active(anon): 133740 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291884 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1448 kB' 'Writeback: 0 kB' 'AnonPages: 124924 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 138048 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73464 kB' 'KernelStack: 6580 kB' 'PageTables: 4656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.152 12:12:10 -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.152 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.152 12:12:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 
-- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.153 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.153 12:12:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.153 12:12:10 -- setup/common.sh@33 -- # echo 0 00:15:17.153 12:12:10 -- setup/common.sh@33 -- # return 0 00:15:17.154 12:12:10 -- setup/hugepages.sh@97 -- # anon=0 00:15:17.154 12:12:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:17.154 12:12:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:17.154 12:12:10 -- setup/common.sh@18 -- # local node= 00:15:17.154 12:12:10 -- setup/common.sh@19 -- # local var val 00:15:17.154 12:12:10 -- setup/common.sh@20 -- # local mem_f mem 00:15:17.154 12:12:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:17.154 12:12:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:17.154 12:12:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:17.154 12:12:10 -- setup/common.sh@28 -- # mapfile -t mem 00:15:17.154 12:12:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8744032 kB' 'MemAvailable: 10549276 kB' 'Buffers: 2952 kB' 'Cached: 2017404 kB' 'SwapCached: 0 kB' 'Active: 851656 kB' 'Inactive: 1291884 kB' 'Active(anon): 133648 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291884 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1448 kB' 'Writeback: 0 kB' 'AnonPages: 124832 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 138044 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73460 kB' 'KernelStack: 6560 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.154 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.154 12:12:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 
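The custom_alloc test whose setup is traced a little further up sizes its pool from a kB count rather than a page count: get_test_nr_hugepages 1048576 asks for 1048576 kB, and since that is at least the default 2048 kB hugepage size it works out to 512 pages, all pinned to node 0 via HUGENODE='nodes_hp[0]=512' before setup.sh re-runs. The snapshots above duly show HugePages_Total: 512 and Hugetlb: 1048576 kB, and verify_nr_hugepages then repeats the familiar get_meminfo scans (AnonHugePages giving anon=0, then HugePages_Surp and HugePages_Rsvd). A sketch of the sizing step, assuming the obvious division where the trace only shows its result:

    # hugepages.sh@49-57 as traced for custom_alloc: requested kB -> hugepage count.
    size=1048576                                       # kB, the argument to get_test_nr_hugepages
    default_hugepages=2048                             # kB, this runner's Hugepagesize
    if (( size >= default_hugepages )); then
        nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512 (division assumed;
    fi                                                 # the trace only shows nr_hugepages=512)
    HUGENODE="nodes_hp[0]=$nr_hugepages"               # trace: HUGENODE='nodes_hp[0]=512'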
00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.155 12:12:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.155 12:12:10 -- setup/common.sh@33 -- # echo 0 00:15:17.155 12:12:10 -- setup/common.sh@33 -- # return 0 00:15:17.155 12:12:10 -- setup/hugepages.sh@99 -- # surp=0 00:15:17.155 12:12:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:17.155 12:12:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:17.155 12:12:10 -- setup/common.sh@18 -- # local node= 00:15:17.155 12:12:10 -- setup/common.sh@19 -- # local var val 00:15:17.155 12:12:10 -- setup/common.sh@20 -- # local mem_f mem 00:15:17.155 12:12:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:17.155 12:12:10 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:15:17.155 12:12:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:17.155 12:12:10 -- setup/common.sh@28 -- # mapfile -t mem 00:15:17.155 12:12:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:17.155 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8744032 kB' 'MemAvailable: 10549276 kB' 'Buffers: 2952 kB' 'Cached: 2017404 kB' 'SwapCached: 0 kB' 'Active: 851916 kB' 'Inactive: 1291884 kB' 'Active(anon): 133908 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291884 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1448 kB' 'Writeback: 0 kB' 'AnonPages: 125084 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 138044 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73460 kB' 'KernelStack: 6560 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.156 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.156 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 
-- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- 
setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.157 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.157 12:12:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.157 12:12:10 -- setup/common.sh@33 -- # echo 0 00:15:17.157 12:12:10 -- setup/common.sh@33 -- # return 0 00:15:17.157 12:12:10 -- setup/hugepages.sh@100 -- # resv=0 00:15:17.157 nr_hugepages=512 00:15:17.157 12:12:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:15:17.157 resv_hugepages=0 00:15:17.157 12:12:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:17.157 surplus_hugepages=0 00:15:17.157 12:12:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:17.157 anon_hugepages=0 00:15:17.157 12:12:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:15:17.157 12:12:10 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:15:17.157 12:12:10 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:15:17.157 12:12:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:17.157 12:12:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:17.157 12:12:10 -- setup/common.sh@18 -- # local node= 00:15:17.157 12:12:10 -- setup/common.sh@19 -- # local var val 00:15:17.157 12:12:10 -- setup/common.sh@20 -- # local mem_f mem 00:15:17.157 12:12:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:17.157 12:12:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:17.157 12:12:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:17.158 12:12:10 -- setup/common.sh@28 -- # mapfile -t mem 00:15:17.158 12:12:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.158 12:12:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8744032 kB' 'MemAvailable: 10549276 kB' 'Buffers: 2952 kB' 'Cached: 2017404 kB' 'SwapCached: 0 kB' 'Active: 851768 kB' 'Inactive: 1291884 kB' 'Active(anon): 133760 kB' 
'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291884 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1448 kB' 'Writeback: 0 kB' 'AnonPages: 124940 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 138044 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73460 kB' 'KernelStack: 6560 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.158 12:12:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.158 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.158 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- 
setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 
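The long [[ ... ]] / continue runs above and below are get_meminfo from setup/common.sh executing under xtrace: the function snapshots /proc/meminfo (or a node's copy under sysfs) with mapfile, then walks it field by field until it reaches the key it was asked for, echoes that value, and returns. A condensed sketch of the same lookup, using a hypothetical helper name rather than the script's own code:

get_meminfo_value() {   # hypothetical name; illustrates the traced lookup only
    local key=$1 node=${2:-} src=/proc/meminfo var val
    # when a node id is given, read the per-node copy under sysfs instead
    [[ -n $node ]] && src=/sys/devices/system/node/node$node/meminfo
    # per-node files prefix every line with "Node <id> "; strip that, then
    # split on ': ' exactly like the read -r var val _ loop in the trace
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$src")
    return 1
}
# e.g. get_meminfo_value HugePages_Total would print 512 for the snapshot above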
00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.159 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.159 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.418 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.418 12:12:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.418 12:12:10 -- 
setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.419 12:12:10 -- setup/common.sh@33 -- # echo 512 00:15:17.419 12:12:10 -- setup/common.sh@33 -- # return 0 00:15:17.419 12:12:10 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:15:17.419 12:12:10 -- setup/hugepages.sh@112 -- # get_nodes 00:15:17.419 12:12:10 -- setup/hugepages.sh@27 -- # local node 00:15:17.419 12:12:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:17.419 12:12:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:15:17.419 12:12:10 -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:17.419 12:12:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:17.419 12:12:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:17.419 12:12:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:17.419 12:12:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:17.419 12:12:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:17.419 12:12:10 -- setup/common.sh@18 -- # local node=0 00:15:17.419 12:12:10 -- setup/common.sh@19 -- # local var val 00:15:17.419 12:12:10 -- setup/common.sh@20 -- # local mem_f mem 00:15:17.419 12:12:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:17.419 12:12:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:17.419 12:12:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:17.419 12:12:10 -- setup/common.sh@28 -- # mapfile -t mem 00:15:17.419 12:12:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8744628 kB' 'MemUsed: 3497352 kB' 'SwapCached: 0 kB' 'Active: 851712 kB' 'Inactive: 1291884 kB' 'Active(anon): 133704 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291884 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1448 kB' 'Writeback: 0 kB' 'FilePages: 2020356 kB' 'Mapped: 48832 kB' 'AnonPages: 124832 kB' 'Shmem: 10464 kB' 'KernelStack: 6544 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64584 kB' 'Slab: 138040 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 
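Just before this node-level pass, hugepages.sh read HugePages_Total back as 512 and checked it against what custom_alloc had configured; the loop this trace sits in repeats the same read against /sys/devices/system/node/node0/meminfo to confirm the pages actually landed on node 0 with no surplus. The shape of that check, with the values from this run (variable names here are illustrative, not the script's own):

nr_hugepages=512   # pages the custom_alloc test requested
surp=0             # HugePages_Surp read back from /proc/meminfo
resv=0             # HugePages_Rsvd read back from /proc/meminfo
hp_total=512       # HugePages_Total read back from /proc/meminfo
(( hp_total == nr_hugepages + surp + resv ))   # holds: 512 == 512 + 0 + 0
# the per-node pass then expects node0 to report 512 total / 0 surplus,
# which is what produces the "node0=512 expecting 512" line further down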
00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.419 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.419 12:12:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.420 12:12:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # continue 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.420 12:12:10 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.420 12:12:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.420 12:12:10 -- setup/common.sh@33 -- # echo 0 00:15:17.420 12:12:10 -- setup/common.sh@33 -- # return 0 00:15:17.420 12:12:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:17.420 12:12:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:17.420 12:12:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:17.420 12:12:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:17.420 node0=512 expecting 512 00:15:17.420 12:12:10 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:15:17.420 12:12:10 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:15:17.420 00:15:17.420 real 0m0.564s 00:15:17.420 user 0m0.284s 00:15:17.420 sys 0m0.311s 00:15:17.420 12:12:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:17.420 12:12:10 -- 
common/autotest_common.sh@10 -- # set +x 00:15:17.420 ************************************ 00:15:17.420 END TEST custom_alloc 00:15:17.420 ************************************ 00:15:17.420 12:12:10 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:15:17.420 12:12:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:17.420 12:12:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:17.420 12:12:10 -- common/autotest_common.sh@10 -- # set +x 00:15:17.420 ************************************ 00:15:17.420 START TEST no_shrink_alloc 00:15:17.420 ************************************ 00:15:17.420 12:12:10 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:15:17.420 12:12:10 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:15:17.420 12:12:10 -- setup/hugepages.sh@49 -- # local size=2097152 00:15:17.420 12:12:10 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:15:17.420 12:12:10 -- setup/hugepages.sh@51 -- # shift 00:15:17.420 12:12:10 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:15:17.420 12:12:10 -- setup/hugepages.sh@52 -- # local node_ids 00:15:17.420 12:12:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:15:17.420 12:12:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:15:17.420 12:12:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:15:17.420 12:12:10 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:15:17.420 12:12:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:15:17.420 12:12:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:15:17.420 12:12:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:15:17.420 12:12:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:15:17.420 12:12:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:15:17.420 12:12:10 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:15:17.420 12:12:10 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:15:17.420 12:12:10 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:15:17.420 12:12:10 -- setup/hugepages.sh@73 -- # return 0 00:15:17.420 12:12:10 -- setup/hugepages.sh@198 -- # setup output 00:15:17.421 12:12:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:17.421 12:12:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:17.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:17.679 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:17.679 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:17.941 12:12:11 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:15:17.941 12:12:11 -- setup/hugepages.sh@89 -- # local node 00:15:17.941 12:12:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:15:17.941 12:12:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:15:17.941 12:12:11 -- setup/hugepages.sh@92 -- # local surp 00:15:17.941 12:12:11 -- setup/hugepages.sh@93 -- # local resv 00:15:17.941 12:12:11 -- setup/hugepages.sh@94 -- # local anon 00:15:17.941 12:12:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:15:17.941 12:12:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:17.941 12:12:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:17.941 12:12:11 -- setup/common.sh@18 -- # local node= 00:15:17.941 12:12:11 -- setup/common.sh@19 -- # local var val 00:15:17.941 12:12:11 -- setup/common.sh@20 -- # local mem_f mem 00:15:17.941 12:12:11 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:15:17.941 12:12:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:17.941 12:12:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:17.941 12:12:11 -- setup/common.sh@28 -- # mapfile -t mem 00:15:17.942 12:12:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7695620 kB' 'MemAvailable: 9500872 kB' 'Buffers: 2952 kB' 'Cached: 2017412 kB' 'SwapCached: 0 kB' 'Active: 852028 kB' 'Inactive: 1291892 kB' 'Active(anon): 134020 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1584 kB' 'Writeback: 0 kB' 'AnonPages: 125172 kB' 'Mapped: 48964 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 138056 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73472 kB' 'KernelStack: 6532 kB' 'PageTables: 4544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- 
setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ 
Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r 
var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.942 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.942 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:17.943 12:12:11 -- setup/common.sh@33 -- # echo 0 00:15:17.943 12:12:11 -- setup/common.sh@33 -- # return 0 00:15:17.943 12:12:11 -- setup/hugepages.sh@97 -- # anon=0 00:15:17.943 12:12:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:17.943 12:12:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:17.943 12:12:11 -- setup/common.sh@18 -- # local node= 00:15:17.943 12:12:11 -- setup/common.sh@19 -- # local var val 00:15:17.943 12:12:11 -- setup/common.sh@20 -- # local mem_f mem 00:15:17.943 12:12:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:17.943 12:12:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:17.943 12:12:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:17.943 12:12:11 -- setup/common.sh@28 -- # mapfile -t mem 00:15:17.943 12:12:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7695620 kB' 'MemAvailable: 9500872 kB' 'Buffers: 2952 kB' 'Cached: 2017412 kB' 'SwapCached: 
0 kB' 'Active: 851792 kB' 'Inactive: 1291892 kB' 'Active(anon): 133784 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1584 kB' 'Writeback: 0 kB' 'AnonPages: 124940 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 138056 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73472 kB' 'KernelStack: 6576 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.943 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.943 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val 
_ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 
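[Annotation, not part of the captured log] The xtrace above is setup/common.sh's meminfo scan: each /proc/meminfo key is read with IFS=': ' and compared against the requested field (here HugePages_Surp), non-matching keys hit "continue", and the matching key echoes its value before "return 0". A minimal sketch of that pattern, assuming the helper behaves as the trace suggests (the function name get_meminfo_sketch is illustrative, not the real helper in setup/common.sh):

get_meminfo_sketch() {
    # Walk /proc/meminfo; echo the value of the first key matching $1.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
# e.g. surp=$(get_meminfo_sketch HugePages_Surp)   # -> 0 in this run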
00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.944 12:12:11 -- setup/common.sh@33 -- # echo 0 00:15:17.944 12:12:11 -- setup/common.sh@33 -- # return 0 00:15:17.944 12:12:11 -- setup/hugepages.sh@99 -- # surp=0 00:15:17.944 12:12:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:17.944 12:12:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:17.944 12:12:11 -- setup/common.sh@18 -- # local node= 00:15:17.944 12:12:11 -- setup/common.sh@19 -- # local var val 00:15:17.944 12:12:11 -- setup/common.sh@20 -- # local mem_f mem 00:15:17.944 12:12:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:17.944 12:12:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:17.944 12:12:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:17.944 12:12:11 -- setup/common.sh@28 -- # mapfile -t mem 00:15:17.944 12:12:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7696920 kB' 'MemAvailable: 9502172 kB' 'Buffers: 2952 kB' 'Cached: 2017412 kB' 'SwapCached: 0 kB' 'Active: 846904 kB' 'Inactive: 1291892 kB' 'Active(anon): 128896 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1584 kB' 'Writeback: 0 kB' 'AnonPages: 120012 kB' 'Mapped: 48324 kB' 'Shmem: 10464 kB' 'KReclaimable: 64584 kB' 'Slab: 138052 kB' 'SReclaimable: 64584 kB' 'SUnreclaim: 73468 kB' 'KernelStack: 6528 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.944 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.944 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 
-- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 
-- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- 
# continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.945 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.945 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 
-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:17.946 12:12:11 -- setup/common.sh@33 -- # echo 0 00:15:17.946 12:12:11 -- setup/common.sh@33 -- # return 0 00:15:17.946 12:12:11 -- setup/hugepages.sh@100 -- # resv=0 00:15:17.946 nr_hugepages=1024 00:15:17.946 12:12:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:15:17.946 resv_hugepages=0 00:15:17.946 12:12:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:17.946 surplus_hugepages=0 00:15:17.946 12:12:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:17.946 anon_hugepages=0 00:15:17.946 12:12:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:15:17.946 12:12:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:17.946 12:12:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:15:17.946 12:12:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:17.946 12:12:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:17.946 12:12:11 -- setup/common.sh@18 -- # local node= 00:15:17.946 12:12:11 -- setup/common.sh@19 -- # local var val 00:15:17.946 12:12:11 -- setup/common.sh@20 -- # local mem_f mem 00:15:17.946 12:12:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:17.946 12:12:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:17.946 12:12:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:17.946 12:12:11 -- setup/common.sh@28 -- # mapfile -t mem 00:15:17.946 12:12:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7697868 kB' 'MemAvailable: 9503116 kB' 'Buffers: 2952 kB' 'Cached: 2017412 kB' 'SwapCached: 0 kB' 'Active: 846080 kB' 'Inactive: 1291892 kB' 'Active(anon): 128072 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1584 kB' 'Writeback: 0 kB' 'AnonPages: 119244 kB' 'Mapped: 48104 kB' 'Shmem: 10464 kB' 'KReclaimable: 64572 kB' 'Slab: 137968 kB' 'SReclaimable: 64572 kB' 'SUnreclaim: 73396 kB' 'KernelStack: 6416 kB' 'PageTables: 3932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- 
# read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.946 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.946 12:12:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- 
setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 
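[Annotation, not part of the captured log] The same scan repeats below for HugePages_Total so hugepages.sh can confirm the pool it configured is what the kernel reports: the values captured so far (anon=0, surp=0, resv=0) plus the requested nr_hugepages must equal the kernel's total. A hedged restatement of that arithmetic, reusing the illustrative helper sketched above:

nr_hugepages=1024                                  # pool size requested by this test
surp=$(get_meminfo_sketch HugePages_Surp)          # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)          # 0 in this run
total=$(get_meminfo_sketch HugePages_Total)        # 1024 in this run
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2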
00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.947 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:17.947 12:12:11 -- setup/common.sh@33 -- # echo 1024 00:15:17.947 12:12:11 -- setup/common.sh@33 -- # return 0 00:15:17.947 12:12:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:17.947 12:12:11 -- setup/hugepages.sh@112 -- # get_nodes 00:15:17.947 12:12:11 -- setup/hugepages.sh@27 -- # local node 00:15:17.947 12:12:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:17.947 12:12:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:15:17.947 12:12:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:17.947 12:12:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:17.947 12:12:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:17.947 12:12:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:17.947 12:12:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:17.947 12:12:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:17.947 12:12:11 -- setup/common.sh@18 -- # local node=0 
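[Annotation, not part of the captured log] Once the system-wide totals check out, the trace moves to per-node verification: get_nodes walks /sys/devices/system/node/node*, and get_meminfo is re-invoked with node=0 so mem_f switches from /proc/meminfo to the node0 meminfo file, whose lines carry a "Node 0 " prefix that the script strips with an extglob expansion before matching keys. A sketch of that per-node variant under the same assumptions (names illustrative):

shopt -s extglob
node=0
mem_f=/sys/devices/system/node/node$node/meminfo
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")        # drop the "Node 0 " prefix from every line
for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == HugePages_Surp ]] && echo "node${node} HugePages_Surp=${val}"   # 0 here
done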
00:15:17.947 12:12:11 -- setup/common.sh@19 -- # local var val 00:15:17.947 12:12:11 -- setup/common.sh@20 -- # local mem_f mem 00:15:17.947 12:12:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:17.947 12:12:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:17.947 12:12:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:17.947 12:12:11 -- setup/common.sh@28 -- # mapfile -t mem 00:15:17.947 12:12:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.947 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7697868 kB' 'MemUsed: 4544112 kB' 'SwapCached: 0 kB' 'Active: 846340 kB' 'Inactive: 1291892 kB' 'Active(anon): 128332 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1584 kB' 'Writeback: 0 kB' 'FilePages: 2020364 kB' 'Mapped: 48104 kB' 'AnonPages: 119504 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 3932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64572 kB' 'Slab: 137968 kB' 'SReclaimable: 64572 kB' 'SUnreclaim: 73396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- 
# read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- 
# continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # continue 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:17.948 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:17.948 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:17.948 12:12:11 -- setup/common.sh@33 -- # echo 0 00:15:17.948 12:12:11 -- setup/common.sh@33 -- # return 0 00:15:17.948 12:12:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:17.949 12:12:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:17.949 12:12:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:17.949 12:12:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:17.949 node0=1024 expecting 1024 00:15:17.949 12:12:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:15:17.949 12:12:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:15:17.949 12:12:11 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:15:17.949 12:12:11 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:15:17.949 12:12:11 -- setup/hugepages.sh@202 -- # setup output 00:15:17.949 12:12:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:17.949 12:12:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:18.208 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:18.208 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:18.208 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:18.208 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:15:18.208 12:12:11 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:15:18.208 12:12:11 -- setup/hugepages.sh@89 -- # local node 00:15:18.208 12:12:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:15:18.208 12:12:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:15:18.208 12:12:11 -- setup/hugepages.sh@92 -- # local surp 00:15:18.208 12:12:11 -- setup/hugepages.sh@93 -- # local resv 00:15:18.208 12:12:11 -- setup/hugepages.sh@94 -- # local anon 00:15:18.208 12:12:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:15:18.208 12:12:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:18.208 12:12:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:18.208 12:12:11 -- setup/common.sh@18 -- # local node= 00:15:18.208 12:12:11 -- setup/common.sh@19 -- # local var val 00:15:18.208 12:12:11 -- setup/common.sh@20 -- # local mem_f mem 00:15:18.208 12:12:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:18.208 12:12:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:18.208 12:12:11 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:15:18.208 12:12:11 -- setup/common.sh@28 -- # mapfile -t mem 00:15:18.208 12:12:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.208 12:12:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7700384 kB' 'MemAvailable: 9505632 kB' 'Buffers: 2952 kB' 'Cached: 2017412 kB' 'SwapCached: 0 kB' 'Active: 847360 kB' 'Inactive: 1291892 kB' 'Active(anon): 129352 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'AnonPages: 120468 kB' 'Mapped: 48336 kB' 'Shmem: 10464 kB' 'KReclaimable: 64572 kB' 'Slab: 137904 kB' 'SReclaimable: 64572 kB' 'SUnreclaim: 73332 kB' 'KernelStack: 6552 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # read -r 
var val _ 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.208 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.208 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # 
IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- 
setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:18.209 12:12:11 -- setup/common.sh@33 -- # echo 0 00:15:18.209 12:12:11 -- setup/common.sh@33 -- # return 0 00:15:18.209 12:12:11 -- setup/hugepages.sh@97 -- # anon=0 00:15:18.209 12:12:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:18.209 12:12:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:18.209 12:12:11 -- setup/common.sh@18 -- # local node= 00:15:18.209 12:12:11 -- setup/common.sh@19 -- # local var val 00:15:18.209 12:12:11 -- setup/common.sh@20 -- # local mem_f mem 00:15:18.209 12:12:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:18.209 12:12:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:18.209 12:12:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:18.209 12:12:11 -- setup/common.sh@28 -- # mapfile -t mem 00:15:18.209 12:12:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7700384 kB' 'MemAvailable: 9505632 kB' 'Buffers: 2952 kB' 'Cached: 2017412 kB' 'SwapCached: 0 kB' 'Active: 846772 kB' 'Inactive: 1291892 kB' 'Active(anon): 128764 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 
'Inactive(file): 1291892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1596 kB' 'Writeback: 0 kB' 'AnonPages: 119888 kB' 'Mapped: 48044 kB' 'Shmem: 10464 kB' 'KReclaimable: 64572 kB' 'Slab: 137896 kB' 'SReclaimable: 64572 kB' 'SUnreclaim: 73324 kB' 'KernelStack: 6452 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.209 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.209 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.471 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.471 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.471 12:12:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.471 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.471 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.471 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.471 12:12:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.471 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.471 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.471 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.471 12:12:11 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 
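The repeating "[[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" pairs above are bash xtrace output from the get_meminfo helper in setup/common.sh: it snapshots the relevant meminfo file, strips any leading "Node <n> " prefix, then walks the entries with IFS=': ' until the requested field matches and prints its value. A minimal sketch of that parsing idiom follows; the function name and exact structure are illustrative assumptions, not the verbatim SPDK code, only the idiom is taken from the trace.
# ---- illustrative sketch, not part of the build log ----
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip seen in the trace
get_meminfo_sketch() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	# a per-node lookup switches to that node's meminfo file
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	local -a mem
	mapfile -t mem <"$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
	local var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue   # the [[ ... ]] / continue pairs in the trace
		echo "${val:-0}"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	echo 0   # field not present
}
# usage: get_meminfo_sketch HugePages_Surp      -> 0 on this run
#        get_meminfo_sketch HugePages_Surp 0    -> node0 value
# ---- end sketch ----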
00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 
12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.472 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.472 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r 
var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.473 12:12:11 -- setup/common.sh@33 -- # echo 0 00:15:18.473 12:12:11 -- setup/common.sh@33 -- # return 0 00:15:18.473 12:12:11 -- setup/hugepages.sh@99 -- # surp=0 00:15:18.473 12:12:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:18.473 12:12:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:18.473 12:12:11 -- setup/common.sh@18 -- # local node= 00:15:18.473 12:12:11 -- setup/common.sh@19 -- # local var val 00:15:18.473 12:12:11 -- setup/common.sh@20 -- # local mem_f mem 00:15:18.473 12:12:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:18.473 12:12:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:18.473 12:12:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:18.473 12:12:11 -- setup/common.sh@28 -- # mapfile -t mem 00:15:18.473 12:12:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7700384 kB' 'MemAvailable: 9505632 kB' 'Buffers: 2952 kB' 'Cached: 2017412 kB' 'SwapCached: 0 kB' 'Active: 846304 kB' 'Inactive: 1291892 kB' 'Active(anon): 128296 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1596 kB' 'Writeback: 0 kB' 'AnonPages: 119452 kB' 'Mapped: 48044 kB' 'Shmem: 10464 kB' 'KReclaimable: 64572 kB' 'Slab: 137880 kB' 'SReclaimable: 64572 kB' 'SUnreclaim: 73308 kB' 'KernelStack: 6452 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 
'DirectMap1G: 9437184 kB' 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 
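The long single-quoted block above is one printf of the whole meminfo snapshot taken for the HugePages_Rsvd lookup; only the hugepage counters matter for this test, and the quoted values are internally consistent, which is what lets the later checks pass. A quick cross-check of that arithmetic, with the numbers copied from the log:
# ---- cross-check of the counters quoted above; numbers copied from the log ----
hugepages_total=1024 hugepages_free=1024 hugepages_rsvd=0 hugepages_surp=0
hugepagesize_kb=2048
echo $((hugepages_total * hugepagesize_kb))   # 2097152, matches 'Hugetlb: 2097152 kB'
# with zero surplus and reserved pages, the later (( total == nr_hugepages + surp + resv ))
# check reduces to 1024 == 1024
# ---- end cross-check ----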
00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.473 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.473 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 
-- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.474 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.474 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
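At this point the scans for AnonHugePages, HugePages_Surp and HugePages_Rsvd have all returned 0, and the entries that follow show verify_nr_hugepages echoing those results and asserting that the configured page count plus surplus and reserved pages equals the system total. A compact sketch of that assertion, reusing the hypothetical get_meminfo_sketch helper from the earlier note; names are illustrative and only the arithmetic mirrors the trace:
# ---- illustrative sketch of the assertion that follows, not the verbatim SPDK function ----
verify_nr_hugepages_sketch() {
	local nr_hugepages=1024                          # what the test configured
	local anon surp resv total
	anon=$(get_meminfo_sketch AnonHugePages)         # 0 in this run
	surp=$(get_meminfo_sketch HugePages_Surp)        # 0
	resv=$(get_meminfo_sketch HugePages_Rsvd)        # 0
	total=$(get_meminfo_sketch HugePages_Total)      # 1024
	echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
	(( total == nr_hugepages + surp + resv ))        # mirrors the @107/@110 checks below
}
# ---- end sketch ----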
00:15:18.475 12:12:11 -- setup/common.sh@33 -- # echo 0 00:15:18.475 12:12:11 -- setup/common.sh@33 -- # return 0 00:15:18.475 12:12:11 -- setup/hugepages.sh@100 -- # resv=0 00:15:18.475 nr_hugepages=1024 00:15:18.475 12:12:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:15:18.475 resv_hugepages=0 00:15:18.475 12:12:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:18.475 surplus_hugepages=0 00:15:18.475 12:12:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:18.475 anon_hugepages=0 00:15:18.475 12:12:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:15:18.475 12:12:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:18.475 12:12:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:15:18.475 12:12:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:18.475 12:12:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:18.475 12:12:11 -- setup/common.sh@18 -- # local node= 00:15:18.475 12:12:11 -- setup/common.sh@19 -- # local var val 00:15:18.475 12:12:11 -- setup/common.sh@20 -- # local mem_f mem 00:15:18.475 12:12:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:18.475 12:12:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:18.475 12:12:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:18.475 12:12:11 -- setup/common.sh@28 -- # mapfile -t mem 00:15:18.475 12:12:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7700384 kB' 'MemAvailable: 9505632 kB' 'Buffers: 2952 kB' 'Cached: 2017412 kB' 'SwapCached: 0 kB' 'Active: 846492 kB' 'Inactive: 1291892 kB' 'Active(anon): 128484 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1596 kB' 'Writeback: 0 kB' 'AnonPages: 119592 kB' 'Mapped: 48044 kB' 'Shmem: 10464 kB' 'KReclaimable: 64572 kB' 'Slab: 137880 kB' 'SReclaimable: 64572 kB' 'SUnreclaim: 73308 kB' 'KernelStack: 6420 kB' 'PageTables: 3916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # 
IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.475 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.475 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
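Once the totals check out, the trace that follows shows get_nodes enumerating /sys/devices/system/node/node<N> and recording a per-node hugepage count (1024 on this single-node VM) before re-reading HugePages_Surp for node 0. The log only shows the already-expanded value, so where that count is read from is an assumption in the sketch below; a per-node hugepages sysfs file is one plausible source, not necessarily SPDK's exact one.
# ---- illustrative sketch of the per-node enumeration; the sysfs path is an assumption ----
shopt -s nullglob
declare -A nodes_sys
for node_dir in /sys/devices/system/node/node[0-9]*; do
	node=${node_dir##*node}
	# one plausible source for the per-node count; the trace only shows the result (1024)
	nodes_sys[$node]=$(<"$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]} node0=${nodes_sys[0]:-n/a}"   # this VM: no_nodes=1 node0=1024
# ---- end sketch ----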
00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.476 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.476 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:18.476 12:12:11 -- setup/common.sh@33 -- # echo 1024 00:15:18.476 12:12:11 -- setup/common.sh@33 -- # return 0 00:15:18.476 12:12:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:18.476 12:12:11 -- setup/hugepages.sh@112 -- # get_nodes 00:15:18.476 12:12:11 -- setup/hugepages.sh@27 -- # local node 00:15:18.476 12:12:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:18.476 12:12:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:15:18.476 12:12:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:18.477 12:12:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:18.477 12:12:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:18.477 12:12:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:18.477 12:12:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:18.477 12:12:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:18.477 12:12:11 -- setup/common.sh@18 -- # local node=0 00:15:18.477 12:12:11 -- setup/common.sh@19 -- # local var val 00:15:18.477 12:12:11 -- 
setup/common.sh@20 -- # local mem_f mem 00:15:18.477 12:12:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:18.477 12:12:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:18.477 12:12:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:18.477 12:12:11 -- setup/common.sh@28 -- # mapfile -t mem 00:15:18.477 12:12:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7700384 kB' 'MemUsed: 4541596 kB' 'SwapCached: 0 kB' 'Active: 846580 kB' 'Inactive: 1291892 kB' 'Active(anon): 128572 kB' 'Inactive(anon): 0 kB' 'Active(file): 718008 kB' 'Inactive(file): 1291892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1596 kB' 'Writeback: 0 kB' 'FilePages: 2020364 kB' 'Mapped: 48044 kB' 'AnonPages: 119684 kB' 'Shmem: 10464 kB' 'KernelStack: 6404 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64572 kB' 'Slab: 137880 kB' 'SReclaimable: 64572 kB' 'SUnreclaim: 73308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # continue 
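The long field-by-field scan above and below is the generic meminfo lookup in setup/common.sh: it picks /proc/meminfo (or the per-node copy under /sys/devices/system/node), splits each 'Key: value' line, and keeps reading until the requested key matches, then echoes the value. A minimal standalone sketch of that lookup pattern follows; the function name meminfo_value and its interface are illustrative, not the SPDK helper itself.

# Sketch: return one field from /proc/meminfo or a per-node meminfo file.
# meminfo_value is an assumed name for illustration only.
meminfo_value() {
    local key=$1 file=${2:-/proc/meminfo} line var val _
    while read -r line; do
        # Per-node files prefix every entry with "Node <n> "; strip it first.
        if [[ $line == Node\ * ]]; then
            line=${line#Node }
            line=${line#* }
        fi
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$key" ]]; then
            printf '%s\n' "$val"   # kB for sizes, a bare count for HugePages_*
            return 0
        fi
    done < "$file"
    return 1
}
# Usage, mirroring the two lookups traced here:
#   meminfo_value HugePages_Total
#   meminfo_value HugePages_Surp /sys/devices/system/node/node0/meminfo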
00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.477 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.477 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.478 
12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # continue 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # IFS=': ' 00:15:18.478 12:12:11 -- setup/common.sh@31 -- # read -r var val _ 00:15:18.478 12:12:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:18.478 12:12:11 -- setup/common.sh@33 -- # echo 0 00:15:18.478 12:12:11 -- setup/common.sh@33 -- # return 0 00:15:18.478 12:12:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:18.478 12:12:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:18.478 12:12:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:18.478 12:12:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:18.478 node0=1024 expecting 1024 00:15:18.478 12:12:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:15:18.478 12:12:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:15:18.478 00:15:18.478 real 0m1.039s 00:15:18.478 user 0m0.483s 00:15:18.478 sys 0m0.596s 00:15:18.478 12:12:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:18.478 12:12:11 -- common/autotest_common.sh@10 -- # set +x 00:15:18.478 ************************************ 00:15:18.478 END TEST no_shrink_alloc 00:15:18.478 ************************************ 00:15:18.478 12:12:11 -- setup/hugepages.sh@217 -- # clear_hp 00:15:18.478 12:12:11 -- setup/hugepages.sh@37 -- # local node hp 00:15:18.478 12:12:11 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:15:18.478 12:12:11 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:15:18.478 12:12:11 -- setup/hugepages.sh@41 -- # echo 0 00:15:18.478 12:12:11 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:15:18.478 12:12:11 -- setup/hugepages.sh@41 -- # echo 0 00:15:18.478 12:12:11 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:15:18.478 12:12:11 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:15:18.478 00:15:18.478 real 0m4.976s 00:15:18.478 user 0m2.316s 00:15:18.478 sys 0m2.686s 00:15:18.478 12:12:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:18.478 ************************************ 00:15:18.478 END TEST hugepages 00:15:18.478 ************************************ 00:15:18.478 12:12:11 -- common/autotest_common.sh@10 -- # set +x 00:15:18.478 12:12:11 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:15:18.478 12:12:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:18.478 12:12:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:18.478 12:12:11 -- common/autotest_common.sh@10 -- # set +x 00:15:18.738 ************************************ 00:15:18.738 START TEST driver 00:15:18.738 
************************************ 00:15:18.738 12:12:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:15:18.738 * Looking for test storage... 00:15:18.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:15:18.738 12:12:12 -- setup/driver.sh@68 -- # setup reset 00:15:18.738 12:12:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:15:18.738 12:12:12 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:19.304 12:12:12 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:15:19.304 12:12:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:19.304 12:12:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:19.304 12:12:12 -- common/autotest_common.sh@10 -- # set +x 00:15:19.304 ************************************ 00:15:19.304 START TEST guess_driver 00:15:19.304 ************************************ 00:15:19.304 12:12:12 -- common/autotest_common.sh@1111 -- # guess_driver 00:15:19.304 12:12:12 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:15:19.304 12:12:12 -- setup/driver.sh@47 -- # local fail=0 00:15:19.304 12:12:12 -- setup/driver.sh@49 -- # pick_driver 00:15:19.304 12:12:12 -- setup/driver.sh@36 -- # vfio 00:15:19.304 12:12:12 -- setup/driver.sh@21 -- # local iommu_grups 00:15:19.304 12:12:12 -- setup/driver.sh@22 -- # local unsafe_vfio 00:15:19.304 12:12:12 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:15:19.304 12:12:12 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:15:19.304 12:12:12 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:15:19.304 12:12:12 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:15:19.304 12:12:12 -- setup/driver.sh@32 -- # return 1 00:15:19.304 12:12:12 -- setup/driver.sh@38 -- # uio 00:15:19.304 12:12:12 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:15:19.304 12:12:12 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:15:19.304 12:12:12 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:15:19.304 12:12:12 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:15:19.304 12:12:12 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:15:19.304 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:15:19.304 12:12:12 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:15:19.304 12:12:12 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:15:19.304 12:12:12 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:15:19.304 Looking for driver=uio_pci_generic 00:15:19.304 12:12:12 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:15:19.304 12:12:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:15:19.304 12:12:12 -- setup/driver.sh@45 -- # setup output config 00:15:19.304 12:12:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:19.304 12:12:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:20.240 12:12:13 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:15:20.240 12:12:13 -- setup/driver.sh@58 -- # continue 00:15:20.240 12:12:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:15:20.240 12:12:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:15:20.240 12:12:13 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:15:20.240 12:12:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:15:20.240 12:12:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:15:20.240 12:12:13 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:15:20.240 12:12:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:15:20.240 12:12:13 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:15:20.240 12:12:13 -- setup/driver.sh@65 -- # setup reset 00:15:20.240 12:12:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:15:20.240 12:12:13 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:20.808 00:15:20.808 real 0m1.431s 00:15:20.808 user 0m0.562s 00:15:20.808 sys 0m0.874s 00:15:20.808 12:12:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:20.808 ************************************ 00:15:20.808 12:12:14 -- common/autotest_common.sh@10 -- # set +x 00:15:20.808 END TEST guess_driver 00:15:20.808 ************************************ 00:15:20.808 ************************************ 00:15:20.808 END TEST driver 00:15:20.808 ************************************ 00:15:20.808 00:15:20.808 real 0m2.187s 00:15:20.808 user 0m0.822s 00:15:20.808 sys 0m1.408s 00:15:20.808 12:12:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:20.808 12:12:14 -- common/autotest_common.sh@10 -- # set +x 00:15:20.808 12:12:14 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:15:20.808 12:12:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:20.808 12:12:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:20.808 12:12:14 -- common/autotest_common.sh@10 -- # set +x 00:15:21.109 ************************************ 00:15:21.109 START TEST devices 00:15:21.109 ************************************ 00:15:21.109 12:12:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:15:21.109 * Looking for test storage... 
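The guess_driver run above boils down to two probes: prefer vfio-pci when IOMMU groups are present (or unsafe no-IOMMU mode is enabled), otherwise accept uio_pci_generic if modprobe can resolve it to a loadable .ko. A rough standalone sketch of that decision, with pick_vfio_or_uio as an illustrative name rather than the SPDK function:

# Sketch: choose a userspace PCI driver the way the traced test does.
pick_vfio_or_uio() {
    local unsafe=""
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    # vfio-pci is usable when at least one IOMMU group exists, or when
    # unsafe no-IOMMU mode has been turned on explicitly.
    if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null || [[ $unsafe == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # modprobe --show-depends prints "insmod .../uio_pci_generic.ko*" lines
    # when the module and its dependencies can actually be loaded.
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found' >&2
    return 1
}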
00:15:21.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:15:21.109 12:12:14 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:15:21.109 12:12:14 -- setup/devices.sh@192 -- # setup reset 00:15:21.109 12:12:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:15:21.109 12:12:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:21.676 12:12:15 -- setup/devices.sh@194 -- # get_zoned_devs 00:15:21.676 12:12:15 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:15:21.676 12:12:15 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:15:21.676 12:12:15 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:15:21.676 12:12:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:21.676 12:12:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:15:21.676 12:12:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:15:21.676 12:12:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:21.676 12:12:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:21.676 12:12:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:21.676 12:12:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:15:21.676 12:12:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:15:21.676 12:12:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:21.676 12:12:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:21.676 12:12:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:21.677 12:12:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:15:21.677 12:12:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:15:21.677 12:12:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:21.677 12:12:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:21.677 12:12:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:21.677 12:12:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:15:21.677 12:12:15 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:15:21.677 12:12:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:21.677 12:12:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:21.677 12:12:15 -- setup/devices.sh@196 -- # blocks=() 00:15:21.677 12:12:15 -- setup/devices.sh@196 -- # declare -a blocks 00:15:21.677 12:12:15 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:15:21.677 12:12:15 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:15:21.677 12:12:15 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:15:21.677 12:12:15 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:15:21.677 12:12:15 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:15:21.677 12:12:15 -- setup/devices.sh@201 -- # ctrl=nvme0 00:15:21.677 12:12:15 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:15:21.677 12:12:15 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:15:21.677 12:12:15 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:15:21.677 12:12:15 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:15:21.677 12:12:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:21.935 No valid GPT data, bailing 00:15:21.935 12:12:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:21.935 
12:12:15 -- scripts/common.sh@391 -- # pt= 00:15:21.935 12:12:15 -- scripts/common.sh@392 -- # return 1 00:15:21.935 12:12:15 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:15:21.935 12:12:15 -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:21.935 12:12:15 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:21.935 12:12:15 -- setup/common.sh@80 -- # echo 4294967296 00:15:21.935 12:12:15 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:15:21.935 12:12:15 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:15:21.935 12:12:15 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:15:21.935 12:12:15 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:15:21.935 12:12:15 -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:15:21.935 12:12:15 -- setup/devices.sh@201 -- # ctrl=nvme0 00:15:21.935 12:12:15 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:15:21.935 12:12:15 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:15:21.935 12:12:15 -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:15:21.935 12:12:15 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:15:21.935 12:12:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:21.935 No valid GPT data, bailing 00:15:21.935 12:12:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:21.935 12:12:15 -- scripts/common.sh@391 -- # pt= 00:15:21.935 12:12:15 -- scripts/common.sh@392 -- # return 1 00:15:21.935 12:12:15 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:15:21.935 12:12:15 -- setup/common.sh@76 -- # local dev=nvme0n2 00:15:21.935 12:12:15 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:21.935 12:12:15 -- setup/common.sh@80 -- # echo 4294967296 00:15:21.935 12:12:15 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:15:21.935 12:12:15 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:15:21.935 12:12:15 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:15:21.935 12:12:15 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:15:21.935 12:12:15 -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:15:21.935 12:12:15 -- setup/devices.sh@201 -- # ctrl=nvme0 00:15:21.935 12:12:15 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:15:21.935 12:12:15 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:15:21.935 12:12:15 -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:15:21.935 12:12:15 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:15:21.935 12:12:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:21.935 No valid GPT data, bailing 00:15:21.935 12:12:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:21.935 12:12:15 -- scripts/common.sh@391 -- # pt= 00:15:21.935 12:12:15 -- scripts/common.sh@392 -- # return 1 00:15:21.935 12:12:15 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:15:21.935 12:12:15 -- setup/common.sh@76 -- # local dev=nvme0n3 00:15:21.935 12:12:15 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:21.935 12:12:15 -- setup/common.sh@80 -- # echo 4294967296 00:15:21.935 12:12:15 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:15:21.935 12:12:15 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:15:21.935 12:12:15 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:15:21.935 12:12:15 -- setup/devices.sh@200 -- # for block in 
"/sys/block/nvme"!(*c*) 00:15:21.935 12:12:15 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:15:21.935 12:12:15 -- setup/devices.sh@201 -- # ctrl=nvme1 00:15:21.935 12:12:15 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:15:21.936 12:12:15 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:15:21.936 12:12:15 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:15:21.936 12:12:15 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:15:21.936 12:12:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:21.936 No valid GPT data, bailing 00:15:22.194 12:12:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:22.194 12:12:15 -- scripts/common.sh@391 -- # pt= 00:15:22.194 12:12:15 -- scripts/common.sh@392 -- # return 1 00:15:22.194 12:12:15 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:15:22.194 12:12:15 -- setup/common.sh@76 -- # local dev=nvme1n1 00:15:22.194 12:12:15 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:22.194 12:12:15 -- setup/common.sh@80 -- # echo 5368709120 00:15:22.194 12:12:15 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:15:22.194 12:12:15 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:15:22.194 12:12:15 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:15:22.194 12:12:15 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:15:22.194 12:12:15 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:15:22.194 12:12:15 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:15:22.194 12:12:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:22.194 12:12:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:22.194 12:12:15 -- common/autotest_common.sh@10 -- # set +x 00:15:22.194 ************************************ 00:15:22.194 START TEST nvme_mount 00:15:22.194 ************************************ 00:15:22.194 12:12:15 -- common/autotest_common.sh@1111 -- # nvme_mount 00:15:22.194 12:12:15 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:15:22.194 12:12:15 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:15:22.194 12:12:15 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:22.194 12:12:15 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:22.194 12:12:15 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:15:22.194 12:12:15 -- setup/common.sh@39 -- # local disk=nvme0n1 00:15:22.194 12:12:15 -- setup/common.sh@40 -- # local part_no=1 00:15:22.194 12:12:15 -- setup/common.sh@41 -- # local size=1073741824 00:15:22.194 12:12:15 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:15:22.194 12:12:15 -- setup/common.sh@44 -- # parts=() 00:15:22.194 12:12:15 -- setup/common.sh@44 -- # local parts 00:15:22.194 12:12:15 -- setup/common.sh@46 -- # (( part = 1 )) 00:15:22.194 12:12:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:22.194 12:12:15 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:15:22.194 12:12:15 -- setup/common.sh@46 -- # (( part++ )) 00:15:22.194 12:12:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:22.194 12:12:15 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:15:22.194 12:12:15 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:15:22.194 12:12:15 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:15:23.131 Creating new GPT entries in memory. 
00:15:23.131 GPT data structures destroyed! You may now partition the disk using fdisk or 00:15:23.131 other utilities. 00:15:23.131 12:12:16 -- setup/common.sh@57 -- # (( part = 1 )) 00:15:23.131 12:12:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:23.131 12:12:16 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:15:23.131 12:12:16 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:15:23.131 12:12:16 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:15:24.507 Creating new GPT entries in memory. 00:15:24.507 The operation has completed successfully. 00:15:24.507 12:12:17 -- setup/common.sh@57 -- # (( part++ )) 00:15:24.507 12:12:17 -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:24.507 12:12:17 -- setup/common.sh@62 -- # wait 56486 00:15:24.507 12:12:17 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:24.507 12:12:17 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:15:24.507 12:12:17 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:24.507 12:12:17 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:15:24.507 12:12:17 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:15:24.507 12:12:17 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:24.507 12:12:17 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:24.507 12:12:17 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:15:24.507 12:12:17 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:15:24.507 12:12:17 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:24.507 12:12:17 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:24.507 12:12:17 -- setup/devices.sh@53 -- # local found=0 00:15:24.507 12:12:17 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:15:24.507 12:12:17 -- setup/devices.sh@56 -- # : 00:15:24.507 12:12:17 -- setup/devices.sh@59 -- # local pci status 00:15:24.507 12:12:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:24.507 12:12:17 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:15:24.507 12:12:17 -- setup/devices.sh@47 -- # setup output config 00:15:24.507 12:12:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:24.507 12:12:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:24.507 12:12:17 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:24.507 12:12:17 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:15:24.507 12:12:17 -- setup/devices.sh@63 -- # found=1 00:15:24.507 12:12:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:24.507 12:12:17 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:24.507 12:12:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:24.767 12:12:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:24.767 12:12:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 
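Condensed, the nvme_mount path traced here zaps the GPT with sgdisk, creates one small partition, formats it ext4 and mounts it under the test directory, where a dummy file is later placed for the verify step. A rough sketch of the same sequence, with paths taken from the log but otherwise illustrative; only run something like this against a disposable test disk:

# Sketch of the partition/format/mount steps shown in the trace.
disk=/dev/nvme0n1
part=${disk}p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all               # destroy any existing GPT/MBR signatures
sgdisk "$disk" --new=1:2048:264191     # one small partition, as in the trace
udevadm settle                         # wait for the node; the harness uses its own uevent-sync helper
mkdir -p "$mnt"
mkfs.ext4 -qF "$part"
mount "$part" "$mnt"
touch "$mnt/test_nvme"                 # the file the verify step checks for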
00:15:24.767 12:12:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:24.767 12:12:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:24.767 12:12:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:15:24.767 12:12:18 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:15:24.767 12:12:18 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:24.767 12:12:18 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:15:24.767 12:12:18 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:24.767 12:12:18 -- setup/devices.sh@110 -- # cleanup_nvme 00:15:24.767 12:12:18 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:24.767 12:12:18 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:24.767 12:12:18 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:15:24.767 12:12:18 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:15:24.767 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:15:24.767 12:12:18 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:15:24.767 12:12:18 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:15:25.027 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:15:25.027 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:15:25.027 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:15:25.027 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:15:25.027 12:12:18 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:15:25.027 12:12:18 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:15:25.027 12:12:18 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:25.027 12:12:18 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:15:25.027 12:12:18 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:15:25.027 12:12:18 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:25.027 12:12:18 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:25.027 12:12:18 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:15:25.027 12:12:18 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:15:25.027 12:12:18 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:25.027 12:12:18 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:25.027 12:12:18 -- setup/devices.sh@53 -- # local found=0 00:15:25.027 12:12:18 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:15:25.027 12:12:18 -- setup/devices.sh@56 -- # : 00:15:25.027 12:12:18 -- setup/devices.sh@59 -- # local pci status 00:15:25.027 12:12:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:15:25.027 12:12:18 -- setup/devices.sh@47 -- # setup output config 00:15:25.027 12:12:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:25.027 12:12:18 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:15:25.027 12:12:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:25.286 12:12:18 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:25.286 12:12:18 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:15:25.286 12:12:18 -- setup/devices.sh@63 -- # found=1 00:15:25.286 12:12:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:25.286 12:12:18 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:25.286 12:12:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:25.545 12:12:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:25.545 12:12:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:25.545 12:12:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:25.545 12:12:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:25.545 12:12:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:15:25.545 12:12:18 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:15:25.545 12:12:18 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:25.545 12:12:18 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:15:25.545 12:12:18 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:25.545 12:12:18 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:25.545 12:12:18 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:15:25.545 12:12:18 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:15:25.545 12:12:18 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:15:25.545 12:12:18 -- setup/devices.sh@50 -- # local mount_point= 00:15:25.545 12:12:18 -- setup/devices.sh@51 -- # local test_file= 00:15:25.545 12:12:18 -- setup/devices.sh@53 -- # local found=0 00:15:25.545 12:12:18 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:15:25.545 12:12:18 -- setup/devices.sh@59 -- # local pci status 00:15:25.545 12:12:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:25.545 12:12:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:15:25.545 12:12:18 -- setup/devices.sh@47 -- # setup output config 00:15:25.545 12:12:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:25.545 12:12:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:25.804 12:12:19 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:25.804 12:12:19 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:15:25.804 12:12:19 -- setup/devices.sh@63 -- # found=1 00:15:25.804 12:12:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:25.804 12:12:19 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:25.804 12:12:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:26.063 12:12:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:26.063 12:12:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:26.063 12:12:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:26.063 12:12:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:26.063 
12:12:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:15:26.063 12:12:19 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:15:26.063 12:12:19 -- setup/devices.sh@68 -- # return 0 00:15:26.063 12:12:19 -- setup/devices.sh@128 -- # cleanup_nvme 00:15:26.063 12:12:19 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:26.063 12:12:19 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:15:26.063 12:12:19 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:15:26.063 12:12:19 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:15:26.063 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:15:26.063 00:15:26.063 real 0m4.017s 00:15:26.063 user 0m0.712s 00:15:26.063 sys 0m1.021s 00:15:26.063 12:12:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:26.063 ************************************ 00:15:26.063 END TEST nvme_mount 00:15:26.063 ************************************ 00:15:26.063 12:12:19 -- common/autotest_common.sh@10 -- # set +x 00:15:26.321 12:12:19 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:15:26.321 12:12:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:26.321 12:12:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:26.321 12:12:19 -- common/autotest_common.sh@10 -- # set +x 00:15:26.321 ************************************ 00:15:26.321 START TEST dm_mount 00:15:26.321 ************************************ 00:15:26.321 12:12:19 -- common/autotest_common.sh@1111 -- # dm_mount 00:15:26.321 12:12:19 -- setup/devices.sh@144 -- # pv=nvme0n1 00:15:26.321 12:12:19 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:15:26.321 12:12:19 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:15:26.321 12:12:19 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:15:26.321 12:12:19 -- setup/common.sh@39 -- # local disk=nvme0n1 00:15:26.321 12:12:19 -- setup/common.sh@40 -- # local part_no=2 00:15:26.321 12:12:19 -- setup/common.sh@41 -- # local size=1073741824 00:15:26.321 12:12:19 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:15:26.321 12:12:19 -- setup/common.sh@44 -- # parts=() 00:15:26.321 12:12:19 -- setup/common.sh@44 -- # local parts 00:15:26.321 12:12:19 -- setup/common.sh@46 -- # (( part = 1 )) 00:15:26.321 12:12:19 -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:26.321 12:12:19 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:15:26.321 12:12:19 -- setup/common.sh@46 -- # (( part++ )) 00:15:26.321 12:12:19 -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:26.321 12:12:19 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:15:26.321 12:12:19 -- setup/common.sh@46 -- # (( part++ )) 00:15:26.321 12:12:19 -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:26.321 12:12:19 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:15:26.321 12:12:19 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:15:26.321 12:12:19 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:15:27.256 Creating new GPT entries in memory. 00:15:27.256 GPT data structures destroyed! You may now partition the disk using fdisk or 00:15:27.256 other utilities. 00:15:27.256 12:12:20 -- setup/common.sh@57 -- # (( part = 1 )) 00:15:27.256 12:12:20 -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:27.256 12:12:20 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:15:27.256 12:12:20 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:15:27.256 12:12:20 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:15:28.632 Creating new GPT entries in memory. 00:15:28.632 The operation has completed successfully. 00:15:28.632 12:12:21 -- setup/common.sh@57 -- # (( part++ )) 00:15:28.632 12:12:21 -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:28.632 12:12:21 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:15:28.632 12:12:21 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:15:28.632 12:12:21 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:15:29.568 The operation has completed successfully. 00:15:29.568 12:12:22 -- setup/common.sh@57 -- # (( part++ )) 00:15:29.568 12:12:22 -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:29.568 12:12:22 -- setup/common.sh@62 -- # wait 56929 00:15:29.568 12:12:22 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:15:29.568 12:12:22 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:29.568 12:12:22 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:15:29.568 12:12:22 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:15:29.568 12:12:22 -- setup/devices.sh@160 -- # for t in {1..5} 00:15:29.568 12:12:22 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:15:29.568 12:12:22 -- setup/devices.sh@161 -- # break 00:15:29.568 12:12:22 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:15:29.568 12:12:22 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:15:29.568 12:12:22 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:15:29.568 12:12:22 -- setup/devices.sh@166 -- # dm=dm-0 00:15:29.568 12:12:22 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:15:29.568 12:12:22 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:15:29.568 12:12:22 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:29.568 12:12:22 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:15:29.568 12:12:22 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:29.568 12:12:22 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:15:29.568 12:12:22 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:15:29.568 12:12:22 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:29.568 12:12:22 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:15:29.568 12:12:22 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:15:29.568 12:12:22 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:15:29.568 12:12:22 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:29.568 12:12:22 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:15:29.568 12:12:22 -- setup/devices.sh@53 -- # local found=0 00:15:29.568 12:12:22 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 
]] 00:15:29.568 12:12:22 -- setup/devices.sh@56 -- # : 00:15:29.568 12:12:22 -- setup/devices.sh@59 -- # local pci status 00:15:29.568 12:12:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:29.568 12:12:22 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:15:29.568 12:12:22 -- setup/devices.sh@47 -- # setup output config 00:15:29.568 12:12:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:29.568 12:12:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:29.827 12:12:23 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:29.827 12:12:23 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:15:29.827 12:12:23 -- setup/devices.sh@63 -- # found=1 00:15:29.827 12:12:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:29.827 12:12:23 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:29.827 12:12:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:29.827 12:12:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:29.827 12:12:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:29.827 12:12:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:29.827 12:12:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:30.085 12:12:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:15:30.085 12:12:23 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:15:30.085 12:12:23 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:30.085 12:12:23 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:15:30.085 12:12:23 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:15:30.085 12:12:23 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:30.085 12:12:23 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:15:30.085 12:12:23 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:15:30.085 12:12:23 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:15:30.085 12:12:23 -- setup/devices.sh@50 -- # local mount_point= 00:15:30.085 12:12:23 -- setup/devices.sh@51 -- # local test_file= 00:15:30.085 12:12:23 -- setup/devices.sh@53 -- # local found=0 00:15:30.085 12:12:23 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:15:30.085 12:12:23 -- setup/devices.sh@59 -- # local pci status 00:15:30.085 12:12:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:30.085 12:12:23 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:15:30.085 12:12:23 -- setup/devices.sh@47 -- # setup output config 00:15:30.085 12:12:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:30.085 12:12:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:30.085 12:12:23 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:30.085 12:12:23 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:15:30.085 12:12:23 -- setup/devices.sh@63 -- # 
found=1 00:15:30.085 12:12:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:30.085 12:12:23 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:30.085 12:12:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:30.343 12:12:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:30.343 12:12:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:30.343 12:12:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:30.343 12:12:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:30.343 12:12:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:15:30.343 12:12:23 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:15:30.343 12:12:23 -- setup/devices.sh@68 -- # return 0 00:15:30.343 12:12:23 -- setup/devices.sh@187 -- # cleanup_dm 00:15:30.343 12:12:23 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:30.343 12:12:23 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:15:30.343 12:12:23 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:15:30.602 12:12:23 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:15:30.602 12:12:23 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:15:30.602 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:15:30.602 12:12:23 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:15:30.602 12:12:23 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:15:30.602 00:15:30.602 real 0m4.249s 00:15:30.602 user 0m0.436s 00:15:30.602 sys 0m0.695s 00:15:30.602 12:12:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:30.602 12:12:23 -- common/autotest_common.sh@10 -- # set +x 00:15:30.602 ************************************ 00:15:30.602 END TEST dm_mount 00:15:30.602 ************************************ 00:15:30.602 12:12:23 -- setup/devices.sh@1 -- # cleanup 00:15:30.602 12:12:23 -- setup/devices.sh@11 -- # cleanup_nvme 00:15:30.602 12:12:23 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:30.602 12:12:23 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:15:30.602 12:12:23 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:15:30.602 12:12:23 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:15:30.602 12:12:23 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:15:30.861 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:15:30.861 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:15:30.861 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:15:30.861 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:15:30.861 12:12:24 -- setup/devices.sh@12 -- # cleanup_dm 00:15:30.861 12:12:24 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:30.861 12:12:24 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:15:30.861 12:12:24 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:15:30.861 12:12:24 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:15:30.861 12:12:24 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:15:30.861 12:12:24 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:15:30.861 ************************************ 00:15:30.861 END TEST devices 00:15:30.861 ************************************ 00:15:30.861 00:15:30.861 real 0m9.928s 00:15:30.861 user 0m1.832s 00:15:30.861 sys 0m2.380s 
00:15:30.861 12:12:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:30.861 12:12:24 -- common/autotest_common.sh@10 -- # set +x 00:15:30.861 00:15:30.861 real 0m22.450s 00:15:30.861 user 0m7.280s 00:15:30.861 sys 0m9.388s 00:15:30.861 12:12:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:30.861 12:12:24 -- common/autotest_common.sh@10 -- # set +x 00:15:30.861 ************************************ 00:15:30.861 END TEST setup.sh 00:15:30.861 ************************************ 00:15:30.861 12:12:24 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:15:31.428 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:31.428 Hugepages 00:15:31.428 node hugesize free / total 00:15:31.428 node0 1048576kB 0 / 0 00:15:31.688 node0 2048kB 2048 / 2048 00:15:31.688 00:15:31.688 Type BDF Vendor Device NUMA Driver Device Block devices 00:15:31.688 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:15:31.688 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:15:31.688 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:15:31.688 12:12:25 -- spdk/autotest.sh@130 -- # uname -s 00:15:31.688 12:12:25 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:15:31.688 12:12:25 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:15:31.688 12:12:25 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:32.623 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:32.623 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:32.623 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:32.623 12:12:25 -- common/autotest_common.sh@1518 -- # sleep 1 00:15:33.559 12:12:26 -- common/autotest_common.sh@1519 -- # bdfs=() 00:15:33.559 12:12:26 -- common/autotest_common.sh@1519 -- # local bdfs 00:15:33.559 12:12:26 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:15:33.559 12:12:26 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:15:33.559 12:12:26 -- common/autotest_common.sh@1499 -- # bdfs=() 00:15:33.559 12:12:26 -- common/autotest_common.sh@1499 -- # local bdfs 00:15:33.559 12:12:26 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:33.559 12:12:26 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:33.559 12:12:26 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:15:33.817 12:12:27 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:15:33.817 12:12:27 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:15:33.817 12:12:27 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:34.076 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:34.076 Waiting for block devices as requested 00:15:34.076 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:34.076 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:34.335 12:12:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:15:34.335 12:12:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:15:34.335 12:12:27 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:15:34.335 12:12:27 -- common/autotest_common.sh@1488 -- # grep 
0000:00:10.0/nvme/nvme 00:15:34.336 12:12:27 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:15:34.336 12:12:27 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:15:34.336 12:12:27 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:15:34.336 12:12:27 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:15:34.336 12:12:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:15:34.336 12:12:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:15:34.336 12:12:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:15:34.336 12:12:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:15:34.336 12:12:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:15:34.336 12:12:27 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:15:34.336 12:12:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:15:34.336 12:12:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:15:34.336 12:12:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:15:34.336 12:12:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:15:34.336 12:12:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:15:34.336 12:12:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:15:34.336 12:12:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:15:34.336 12:12:27 -- common/autotest_common.sh@1543 -- # continue 00:15:34.336 12:12:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:15:34.336 12:12:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:15:34.336 12:12:27 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:15:34.336 12:12:27 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:15:34.336 12:12:27 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:15:34.336 12:12:27 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:15:34.336 12:12:27 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:15:34.336 12:12:27 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:15:34.336 12:12:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:15:34.336 12:12:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:15:34.336 12:12:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:15:34.336 12:12:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:15:34.336 12:12:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:15:34.336 12:12:27 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:15:34.336 12:12:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:15:34.336 12:12:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:15:34.336 12:12:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:15:34.336 12:12:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:15:34.336 12:12:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:15:34.336 12:12:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:15:34.336 12:12:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:15:34.336 12:12:27 -- common/autotest_common.sh@1543 -- # continue 00:15:34.336 12:12:27 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:15:34.336 12:12:27 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:15:34.336 12:12:27 -- common/autotest_common.sh@10 -- # set +x 00:15:34.336 12:12:27 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:15:34.336 12:12:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:34.336 12:12:27 -- common/autotest_common.sh@10 -- # set +x 00:15:34.336 12:12:27 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:34.904 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:35.180 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:35.180 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:35.180 12:12:28 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:15:35.180 12:12:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:35.180 12:12:28 -- common/autotest_common.sh@10 -- # set +x 00:15:35.180 12:12:28 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:15:35.180 12:12:28 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:15:35.180 12:12:28 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:15:35.180 12:12:28 -- common/autotest_common.sh@1563 -- # bdfs=() 00:15:35.180 12:12:28 -- common/autotest_common.sh@1563 -- # local bdfs 00:15:35.180 12:12:28 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:15:35.180 12:12:28 -- common/autotest_common.sh@1499 -- # bdfs=() 00:15:35.180 12:12:28 -- common/autotest_common.sh@1499 -- # local bdfs 00:15:35.180 12:12:28 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:35.180 12:12:28 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:35.180 12:12:28 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:15:35.443 12:12:28 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:15:35.443 12:12:28 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:15:35.443 12:12:28 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:15:35.443 12:12:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:15:35.443 12:12:28 -- common/autotest_common.sh@1566 -- # device=0x0010 00:15:35.443 12:12:28 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:15:35.443 12:12:28 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:15:35.443 12:12:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:15:35.443 12:12:28 -- common/autotest_common.sh@1566 -- # device=0x0010 00:15:35.443 12:12:28 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:15:35.443 12:12:28 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:15:35.443 12:12:28 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:15:35.443 12:12:28 -- common/autotest_common.sh@1579 -- # return 0 00:15:35.443 12:12:28 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:15:35.443 12:12:28 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:15:35.443 12:12:28 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:15:35.443 12:12:28 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:15:35.443 12:12:28 -- spdk/autotest.sh@162 -- # timing_enter lib 00:15:35.443 12:12:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:35.443 12:12:28 -- common/autotest_common.sh@10 -- # set +x 00:15:35.443 12:12:28 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:15:35.443 12:12:28 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:35.443 12:12:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:35.443 12:12:28 -- common/autotest_common.sh@10 -- # set +x 00:15:35.443 ************************************ 00:15:35.443 START TEST env 00:15:35.443 ************************************ 00:15:35.443 12:12:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:15:35.443 * Looking for test storage... 00:15:35.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:15:35.443 12:12:28 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:15:35.443 12:12:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:35.443 12:12:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:35.443 12:12:28 -- common/autotest_common.sh@10 -- # set +x 00:15:35.701 ************************************ 00:15:35.701 START TEST env_memory 00:15:35.701 ************************************ 00:15:35.701 12:12:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:15:35.701 00:15:35.701 00:15:35.701 CUnit - A unit testing framework for C - Version 2.1-3 00:15:35.701 http://cunit.sourceforge.net/ 00:15:35.701 00:15:35.701 00:15:35.701 Suite: memory 00:15:35.701 Test: alloc and free memory map ...[2024-04-26 12:12:28.971776] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:15:35.701 passed 00:15:35.701 Test: mem map translation ...[2024-04-26 12:12:29.002549] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:15:35.701 [2024-04-26 12:12:29.002592] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:15:35.701 [2024-04-26 12:12:29.002648] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:15:35.701 [2024-04-26 12:12:29.002658] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:15:35.701 passed 00:15:35.701 Test: mem map registration ...[2024-04-26 12:12:29.066357] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:15:35.701 [2024-04-26 12:12:29.066411] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:15:35.701 passed 00:15:35.701 Test: mem map adjacent registrations ...passed 00:15:35.701 00:15:35.701 Run Summary: Type Total Ran Passed Failed Inactive 00:15:35.701 suites 1 1 n/a 0 0 00:15:35.701 tests 4 4 4 0 0 00:15:35.701 asserts 152 152 152 0 n/a 00:15:35.701 00:15:35.701 Elapsed time = 0.213 seconds 00:15:35.701 00:15:35.701 real 0m0.225s 00:15:35.701 user 0m0.212s 00:15:35.701 sys 0m0.011s 00:15:35.701 12:12:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:35.701 12:12:29 -- common/autotest_common.sh@10 -- # set +x 00:15:35.701 ************************************ 00:15:35.701 END TEST env_memory 00:15:35.701 ************************************ 00:15:35.960 12:12:29 -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:15:35.960 12:12:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:35.960 12:12:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:35.960 12:12:29 -- common/autotest_common.sh@10 -- # set +x 00:15:35.960 ************************************ 00:15:35.960 START TEST env_vtophys 00:15:35.960 ************************************ 00:15:35.960 12:12:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:15:35.960 EAL: lib.eal log level changed from notice to debug 00:15:35.960 EAL: Detected lcore 0 as core 0 on socket 0 00:15:35.960 EAL: Detected lcore 1 as core 0 on socket 0 00:15:35.960 EAL: Detected lcore 2 as core 0 on socket 0 00:15:35.960 EAL: Detected lcore 3 as core 0 on socket 0 00:15:35.960 EAL: Detected lcore 4 as core 0 on socket 0 00:15:35.960 EAL: Detected lcore 5 as core 0 on socket 0 00:15:35.960 EAL: Detected lcore 6 as core 0 on socket 0 00:15:35.960 EAL: Detected lcore 7 as core 0 on socket 0 00:15:35.960 EAL: Detected lcore 8 as core 0 on socket 0 00:15:35.960 EAL: Detected lcore 9 as core 0 on socket 0 00:15:35.960 EAL: Maximum logical cores by configuration: 128 00:15:35.960 EAL: Detected CPU lcores: 10 00:15:35.960 EAL: Detected NUMA nodes: 1 00:15:35.960 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:15:35.960 EAL: Detected shared linkage of DPDK 00:15:35.960 EAL: No shared files mode enabled, IPC will be disabled 00:15:35.960 EAL: Selected IOVA mode 'PA' 00:15:35.960 EAL: Probing VFIO support... 00:15:35.960 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:15:35.960 EAL: VFIO modules not loaded, skipping VFIO support... 00:15:35.960 EAL: Ask a virtual area of 0x2e000 bytes 00:15:35.960 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:15:35.960 EAL: Setting up physically contiguous memory... 
00:15:35.960 EAL: Setting maximum number of open files to 524288 00:15:35.960 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:15:35.960 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:15:35.960 EAL: Ask a virtual area of 0x61000 bytes 00:15:35.960 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:15:35.960 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:35.960 EAL: Ask a virtual area of 0x400000000 bytes 00:15:35.960 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:15:35.960 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:15:35.960 EAL: Ask a virtual area of 0x61000 bytes 00:15:35.960 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:15:35.960 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:35.960 EAL: Ask a virtual area of 0x400000000 bytes 00:15:35.960 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:15:35.960 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:15:35.960 EAL: Ask a virtual area of 0x61000 bytes 00:15:35.960 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:15:35.960 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:35.960 EAL: Ask a virtual area of 0x400000000 bytes 00:15:35.960 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:15:35.960 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:15:35.960 EAL: Ask a virtual area of 0x61000 bytes 00:15:35.960 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:15:35.960 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:35.960 EAL: Ask a virtual area of 0x400000000 bytes 00:15:35.960 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:15:35.960 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:15:35.960 EAL: Hugepages will be freed exactly as allocated. 00:15:35.960 EAL: No shared files mode enabled, IPC is disabled 00:15:35.960 EAL: No shared files mode enabled, IPC is disabled 00:15:35.960 EAL: TSC frequency is ~2200000 KHz 00:15:35.960 EAL: Main lcore 0 is ready (tid=7f0735630a00;cpuset=[0]) 00:15:35.960 EAL: Trying to obtain current memory policy. 00:15:35.960 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:35.960 EAL: Restoring previous memory policy: 0 00:15:35.960 EAL: request: mp_malloc_sync 00:15:35.960 EAL: No shared files mode enabled, IPC is disabled 00:15:35.960 EAL: Heap on socket 0 was expanded by 2MB 00:15:35.960 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:15:35.960 EAL: No PCI address specified using 'addr=' in: bus=pci 00:15:35.960 EAL: Mem event callback 'spdk:(nil)' registered 00:15:35.960 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:15:36.219 00:15:36.219 00:15:36.219 CUnit - A unit testing framework for C - Version 2.1-3 00:15:36.219 http://cunit.sourceforge.net/ 00:15:36.219 00:15:36.219 00:15:36.219 Suite: components_suite 00:15:36.219 Test: vtophys_malloc_test ...passed 00:15:36.219 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:15:36.219 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:36.219 EAL: Restoring previous memory policy: 4 00:15:36.219 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.219 EAL: request: mp_malloc_sync 00:15:36.219 EAL: No shared files mode enabled, IPC is disabled 00:15:36.219 EAL: Heap on socket 0 was expanded by 4MB 00:15:36.219 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.219 EAL: request: mp_malloc_sync 00:15:36.219 EAL: No shared files mode enabled, IPC is disabled 00:15:36.219 EAL: Heap on socket 0 was shrunk by 4MB 00:15:36.219 EAL: Trying to obtain current memory policy. 00:15:36.219 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:36.219 EAL: Restoring previous memory policy: 4 00:15:36.219 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.219 EAL: request: mp_malloc_sync 00:15:36.219 EAL: No shared files mode enabled, IPC is disabled 00:15:36.219 EAL: Heap on socket 0 was expanded by 6MB 00:15:36.219 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.219 EAL: request: mp_malloc_sync 00:15:36.219 EAL: No shared files mode enabled, IPC is disabled 00:15:36.219 EAL: Heap on socket 0 was shrunk by 6MB 00:15:36.219 EAL: Trying to obtain current memory policy. 00:15:36.219 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:36.219 EAL: Restoring previous memory policy: 4 00:15:36.219 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.219 EAL: request: mp_malloc_sync 00:15:36.219 EAL: No shared files mode enabled, IPC is disabled 00:15:36.219 EAL: Heap on socket 0 was expanded by 10MB 00:15:36.219 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.219 EAL: request: mp_malloc_sync 00:15:36.219 EAL: No shared files mode enabled, IPC is disabled 00:15:36.219 EAL: Heap on socket 0 was shrunk by 10MB 00:15:36.219 EAL: Trying to obtain current memory policy. 00:15:36.219 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:36.219 EAL: Restoring previous memory policy: 4 00:15:36.219 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.219 EAL: request: mp_malloc_sync 00:15:36.219 EAL: No shared files mode enabled, IPC is disabled 00:15:36.219 EAL: Heap on socket 0 was expanded by 18MB 00:15:36.219 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.219 EAL: request: mp_malloc_sync 00:15:36.219 EAL: No shared files mode enabled, IPC is disabled 00:15:36.219 EAL: Heap on socket 0 was shrunk by 18MB 00:15:36.219 EAL: Trying to obtain current memory policy. 00:15:36.219 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:36.219 EAL: Restoring previous memory policy: 4 00:15:36.219 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.219 EAL: request: mp_malloc_sync 00:15:36.219 EAL: No shared files mode enabled, IPC is disabled 00:15:36.219 EAL: Heap on socket 0 was expanded by 34MB 00:15:36.219 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.219 EAL: request: mp_malloc_sync 00:15:36.219 EAL: No shared files mode enabled, IPC is disabled 00:15:36.219 EAL: Heap on socket 0 was shrunk by 34MB 00:15:36.219 EAL: Trying to obtain current memory policy. 
00:15:36.219 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:36.219 EAL: Restoring previous memory policy: 4 00:15:36.219 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.219 EAL: request: mp_malloc_sync 00:15:36.219 EAL: No shared files mode enabled, IPC is disabled 00:15:36.219 EAL: Heap on socket 0 was expanded by 66MB 00:15:36.219 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.219 EAL: request: mp_malloc_sync 00:15:36.219 EAL: No shared files mode enabled, IPC is disabled 00:15:36.219 EAL: Heap on socket 0 was shrunk by 66MB 00:15:36.219 EAL: Trying to obtain current memory policy. 00:15:36.219 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:36.219 EAL: Restoring previous memory policy: 4 00:15:36.219 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.219 EAL: request: mp_malloc_sync 00:15:36.219 EAL: No shared files mode enabled, IPC is disabled 00:15:36.219 EAL: Heap on socket 0 was expanded by 130MB 00:15:36.219 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.219 EAL: request: mp_malloc_sync 00:15:36.219 EAL: No shared files mode enabled, IPC is disabled 00:15:36.219 EAL: Heap on socket 0 was shrunk by 130MB 00:15:36.219 EAL: Trying to obtain current memory policy. 00:15:36.219 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:36.219 EAL: Restoring previous memory policy: 4 00:15:36.219 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.219 EAL: request: mp_malloc_sync 00:15:36.219 EAL: No shared files mode enabled, IPC is disabled 00:15:36.219 EAL: Heap on socket 0 was expanded by 258MB 00:15:36.219 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.478 EAL: request: mp_malloc_sync 00:15:36.478 EAL: No shared files mode enabled, IPC is disabled 00:15:36.478 EAL: Heap on socket 0 was shrunk by 258MB 00:15:36.478 EAL: Trying to obtain current memory policy. 00:15:36.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:36.478 EAL: Restoring previous memory policy: 4 00:15:36.478 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.478 EAL: request: mp_malloc_sync 00:15:36.478 EAL: No shared files mode enabled, IPC is disabled 00:15:36.478 EAL: Heap on socket 0 was expanded by 514MB 00:15:36.737 EAL: Calling mem event callback 'spdk:(nil)' 00:15:36.737 EAL: request: mp_malloc_sync 00:15:36.737 EAL: No shared files mode enabled, IPC is disabled 00:15:36.737 EAL: Heap on socket 0 was shrunk by 514MB 00:15:36.737 EAL: Trying to obtain current memory policy. 
00:15:36.737 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:37.302 EAL: Restoring previous memory policy: 4 00:15:37.302 EAL: Calling mem event callback 'spdk:(nil)' 00:15:37.302 EAL: request: mp_malloc_sync 00:15:37.302 EAL: No shared files mode enabled, IPC is disabled 00:15:37.302 EAL: Heap on socket 0 was expanded by 1026MB 00:15:37.560 EAL: Calling mem event callback 'spdk:(nil)' 00:15:37.819 passed 00:15:37.819 00:15:37.819 Run Summary: Type Total Ran Passed Failed Inactive 00:15:37.819 suites 1 1 n/a 0 0 00:15:37.819 tests 2 2 2 0 0 00:15:37.819 asserts 5239 5239 5239 0 n/a 00:15:37.819 00:15:37.819 Elapsed time = 1.667 seconds 00:15:37.819 EAL: request: mp_malloc_sync 00:15:37.819 EAL: No shared files mode enabled, IPC is disabled 00:15:37.819 EAL: Heap on socket 0 was shrunk by 1026MB 00:15:37.819 EAL: Calling mem event callback 'spdk:(nil)' 00:15:37.819 EAL: request: mp_malloc_sync 00:15:37.819 EAL: No shared files mode enabled, IPC is disabled 00:15:37.819 EAL: Heap on socket 0 was shrunk by 2MB 00:15:37.819 EAL: No shared files mode enabled, IPC is disabled 00:15:37.819 EAL: No shared files mode enabled, IPC is disabled 00:15:37.819 EAL: No shared files mode enabled, IPC is disabled 00:15:37.819 00:15:37.819 real 0m1.866s 00:15:37.819 user 0m0.868s 00:15:37.819 sys 0m0.860s 00:15:37.819 12:12:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:37.819 12:12:31 -- common/autotest_common.sh@10 -- # set +x 00:15:37.819 ************************************ 00:15:37.819 END TEST env_vtophys 00:15:37.819 ************************************ 00:15:37.819 12:12:31 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:37.819 12:12:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:37.819 12:12:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:37.819 12:12:31 -- common/autotest_common.sh@10 -- # set +x 00:15:37.819 ************************************ 00:15:37.819 START TEST env_pci 00:15:37.819 ************************************ 00:15:37.819 12:12:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:37.819 00:15:37.819 00:15:37.819 CUnit - A unit testing framework for C - Version 2.1-3 00:15:37.819 http://cunit.sourceforge.net/ 00:15:37.819 00:15:37.819 00:15:37.819 Suite: pci 00:15:37.819 Test: pci_hook ...[2024-04-26 12:12:31.260716] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58141 has claimed it 00:15:37.819 passed 00:15:37.819 00:15:37.819 Run Summary: Type Total Ran Passed Failed Inactive 00:15:37.819 suites 1 1 n/a 0 0 00:15:37.819 tests 1 1 1 0 0 00:15:37.819 asserts 25 25 25 0 n/a 00:15:37.819 00:15:37.819 Elapsed time = 0.002 seconds 00:15:37.819 EAL: Cannot find device (10000:00:01.0) 00:15:37.819 EAL: Failed to attach device on primary process 00:15:37.819 00:15:37.819 real 0m0.021s 00:15:37.819 user 0m0.010s 00:15:37.819 sys 0m0.011s 00:15:37.819 12:12:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:37.819 12:12:31 -- common/autotest_common.sh@10 -- # set +x 00:15:37.819 ************************************ 00:15:37.819 END TEST env_pci 00:15:37.819 ************************************ 00:15:38.079 12:12:31 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:15:38.079 12:12:31 -- env/env.sh@15 -- # uname 00:15:38.079 12:12:31 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:15:38.079 12:12:31 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:15:38.079 12:12:31 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:15:38.079 12:12:31 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:38.079 12:12:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:38.079 12:12:31 -- common/autotest_common.sh@10 -- # set +x 00:15:38.079 ************************************ 00:15:38.079 START TEST env_dpdk_post_init 00:15:38.079 ************************************ 00:15:38.079 12:12:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:15:38.079 EAL: Detected CPU lcores: 10 00:15:38.079 EAL: Detected NUMA nodes: 1 00:15:38.079 EAL: Detected shared linkage of DPDK 00:15:38.079 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:38.079 EAL: Selected IOVA mode 'PA' 00:15:38.079 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:38.079 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:15:38.079 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:15:38.338 Starting DPDK initialization... 00:15:38.338 Starting SPDK post initialization... 00:15:38.338 SPDK NVMe probe 00:15:38.338 Attaching to 0000:00:10.0 00:15:38.338 Attaching to 0000:00:11.0 00:15:38.338 Attached to 0000:00:10.0 00:15:38.338 Attached to 0000:00:11.0 00:15:38.338 Cleaning up... 00:15:38.338 00:15:38.338 real 0m0.175s 00:15:38.338 user 0m0.045s 00:15:38.338 sys 0m0.030s 00:15:38.338 12:12:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:38.338 12:12:31 -- common/autotest_common.sh@10 -- # set +x 00:15:38.338 ************************************ 00:15:38.338 END TEST env_dpdk_post_init 00:15:38.338 ************************************ 00:15:38.338 12:12:31 -- env/env.sh@26 -- # uname 00:15:38.338 12:12:31 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:15:38.338 12:12:31 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:15:38.338 12:12:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:38.338 12:12:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:38.338 12:12:31 -- common/autotest_common.sh@10 -- # set +x 00:15:38.338 ************************************ 00:15:38.338 START TEST env_mem_callbacks 00:15:38.338 ************************************ 00:15:38.338 12:12:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:15:38.338 EAL: Detected CPU lcores: 10 00:15:38.338 EAL: Detected NUMA nodes: 1 00:15:38.338 EAL: Detected shared linkage of DPDK 00:15:38.338 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:38.338 EAL: Selected IOVA mode 'PA' 00:15:38.338 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:38.338 00:15:38.338 00:15:38.338 CUnit - A unit testing framework for C - Version 2.1-3 00:15:38.338 http://cunit.sourceforge.net/ 00:15:38.338 00:15:38.338 00:15:38.338 Suite: memory 00:15:38.338 Test: test ... 
00:15:38.338 register 0x200000200000 2097152 00:15:38.338 malloc 3145728 00:15:38.338 register 0x200000400000 4194304 00:15:38.338 buf 0x200000500000 len 3145728 PASSED 00:15:38.338 malloc 64 00:15:38.338 buf 0x2000004fff40 len 64 PASSED 00:15:38.338 malloc 4194304 00:15:38.338 register 0x200000800000 6291456 00:15:38.338 buf 0x200000a00000 len 4194304 PASSED 00:15:38.338 free 0x200000500000 3145728 00:15:38.338 free 0x2000004fff40 64 00:15:38.338 unregister 0x200000400000 4194304 PASSED 00:15:38.338 free 0x200000a00000 4194304 00:15:38.338 unregister 0x200000800000 6291456 PASSED 00:15:38.338 malloc 8388608 00:15:38.338 register 0x200000400000 10485760 00:15:38.338 buf 0x200000600000 len 8388608 PASSED 00:15:38.338 free 0x200000600000 8388608 00:15:38.338 unregister 0x200000400000 10485760 PASSED 00:15:38.338 passed 00:15:38.338 00:15:38.338 Run Summary: Type Total Ran Passed Failed Inactive 00:15:38.338 suites 1 1 n/a 0 0 00:15:38.338 tests 1 1 1 0 0 00:15:38.338 asserts 15 15 15 0 n/a 00:15:38.338 00:15:38.338 Elapsed time = 0.010 seconds 00:15:38.599 ************************************ 00:15:38.599 END TEST env_mem_callbacks 00:15:38.599 ************************************ 00:15:38.599 00:15:38.599 real 0m0.140s 00:15:38.599 user 0m0.016s 00:15:38.599 sys 0m0.021s 00:15:38.599 12:12:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:38.599 12:12:31 -- common/autotest_common.sh@10 -- # set +x 00:15:38.599 ************************************ 00:15:38.599 END TEST env 00:15:38.599 ************************************ 00:15:38.599 00:15:38.599 real 0m3.067s 00:15:38.599 user 0m1.365s 00:15:38.599 sys 0m1.283s 00:15:38.599 12:12:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:38.599 12:12:31 -- common/autotest_common.sh@10 -- # set +x 00:15:38.599 12:12:31 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:15:38.599 12:12:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:38.599 12:12:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:38.599 12:12:31 -- common/autotest_common.sh@10 -- # set +x 00:15:38.599 ************************************ 00:15:38.599 START TEST rpc 00:15:38.599 ************************************ 00:15:38.599 12:12:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:15:38.599 * Looking for test storage... 00:15:38.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:15:38.599 12:12:32 -- rpc/rpc.sh@65 -- # spdk_pid=58271 00:15:38.599 12:12:32 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:38.599 12:12:32 -- rpc/rpc.sh@67 -- # waitforlisten 58271 00:15:38.599 12:12:32 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:15:38.599 12:12:32 -- common/autotest_common.sh@817 -- # '[' -z 58271 ']' 00:15:38.599 12:12:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.599 12:12:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:38.599 12:12:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:38.599 12:12:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:38.599 12:12:32 -- common/autotest_common.sh@10 -- # set +x 00:15:38.858 [2024-04-26 12:12:32.092714] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:15:38.858 [2024-04-26 12:12:32.092802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58271 ] 00:15:38.858 [2024-04-26 12:12:32.230599] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.117 [2024-04-26 12:12:32.349568] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:15:39.117 [2024-04-26 12:12:32.349879] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58271' to capture a snapshot of events at runtime. 00:15:39.117 [2024-04-26 12:12:32.350058] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.117 [2024-04-26 12:12:32.350238] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.117 [2024-04-26 12:12:32.350286] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58271 for offline analysis/debug. 00:15:39.117 [2024-04-26 12:12:32.350476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.682 12:12:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:39.682 12:12:33 -- common/autotest_common.sh@850 -- # return 0 00:15:39.682 12:12:33 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:15:39.682 12:12:33 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:15:39.682 12:12:33 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:15:39.682 12:12:33 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:15:39.682 12:12:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:39.682 12:12:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:39.682 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:15:39.682 ************************************ 00:15:39.682 START TEST rpc_integrity 00:15:39.682 ************************************ 00:15:39.682 12:12:33 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:15:39.682 12:12:33 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:39.682 12:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.682 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:15:39.682 12:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.682 12:12:33 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:39.682 12:12:33 -- rpc/rpc.sh@13 -- # jq length 00:15:39.940 12:12:33 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:39.940 12:12:33 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:39.940 12:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.940 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:15:39.940 12:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.940 12:12:33 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:15:39.940 12:12:33 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 
00:15:39.940 12:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.940 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:15:39.940 12:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.940 12:12:33 -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:39.940 { 00:15:39.940 "name": "Malloc0", 00:15:39.940 "aliases": [ 00:15:39.940 "020de2f5-6bf3-4a40-b99c-9f91d66da594" 00:15:39.940 ], 00:15:39.940 "product_name": "Malloc disk", 00:15:39.940 "block_size": 512, 00:15:39.940 "num_blocks": 16384, 00:15:39.940 "uuid": "020de2f5-6bf3-4a40-b99c-9f91d66da594", 00:15:39.940 "assigned_rate_limits": { 00:15:39.940 "rw_ios_per_sec": 0, 00:15:39.940 "rw_mbytes_per_sec": 0, 00:15:39.940 "r_mbytes_per_sec": 0, 00:15:39.940 "w_mbytes_per_sec": 0 00:15:39.940 }, 00:15:39.941 "claimed": false, 00:15:39.941 "zoned": false, 00:15:39.941 "supported_io_types": { 00:15:39.941 "read": true, 00:15:39.941 "write": true, 00:15:39.941 "unmap": true, 00:15:39.941 "write_zeroes": true, 00:15:39.941 "flush": true, 00:15:39.941 "reset": true, 00:15:39.941 "compare": false, 00:15:39.941 "compare_and_write": false, 00:15:39.941 "abort": true, 00:15:39.941 "nvme_admin": false, 00:15:39.941 "nvme_io": false 00:15:39.941 }, 00:15:39.941 "memory_domains": [ 00:15:39.941 { 00:15:39.941 "dma_device_id": "system", 00:15:39.941 "dma_device_type": 1 00:15:39.941 }, 00:15:39.941 { 00:15:39.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.941 "dma_device_type": 2 00:15:39.941 } 00:15:39.941 ], 00:15:39.941 "driver_specific": {} 00:15:39.941 } 00:15:39.941 ]' 00:15:39.941 12:12:33 -- rpc/rpc.sh@17 -- # jq length 00:15:39.941 12:12:33 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:39.941 12:12:33 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:15:39.941 12:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.941 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:15:39.941 [2024-04-26 12:12:33.257992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:15:39.941 [2024-04-26 12:12:33.258058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.941 [2024-04-26 12:12:33.258080] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1800370 00:15:39.941 [2024-04-26 12:12:33.258090] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.941 [2024-04-26 12:12:33.259813] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.941 [2024-04-26 12:12:33.259851] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:39.941 Passthru0 00:15:39.941 12:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.941 12:12:33 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:39.941 12:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.941 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:15:39.941 12:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.941 12:12:33 -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:39.941 { 00:15:39.941 "name": "Malloc0", 00:15:39.941 "aliases": [ 00:15:39.941 "020de2f5-6bf3-4a40-b99c-9f91d66da594" 00:15:39.941 ], 00:15:39.941 "product_name": "Malloc disk", 00:15:39.941 "block_size": 512, 00:15:39.941 "num_blocks": 16384, 00:15:39.941 "uuid": "020de2f5-6bf3-4a40-b99c-9f91d66da594", 00:15:39.941 "assigned_rate_limits": { 00:15:39.941 "rw_ios_per_sec": 0, 00:15:39.941 "rw_mbytes_per_sec": 0, 00:15:39.941 "r_mbytes_per_sec": 0, 00:15:39.941 
"w_mbytes_per_sec": 0 00:15:39.941 }, 00:15:39.941 "claimed": true, 00:15:39.941 "claim_type": "exclusive_write", 00:15:39.941 "zoned": false, 00:15:39.941 "supported_io_types": { 00:15:39.941 "read": true, 00:15:39.941 "write": true, 00:15:39.941 "unmap": true, 00:15:39.941 "write_zeroes": true, 00:15:39.941 "flush": true, 00:15:39.941 "reset": true, 00:15:39.941 "compare": false, 00:15:39.941 "compare_and_write": false, 00:15:39.941 "abort": true, 00:15:39.941 "nvme_admin": false, 00:15:39.941 "nvme_io": false 00:15:39.941 }, 00:15:39.941 "memory_domains": [ 00:15:39.941 { 00:15:39.941 "dma_device_id": "system", 00:15:39.941 "dma_device_type": 1 00:15:39.941 }, 00:15:39.941 { 00:15:39.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.941 "dma_device_type": 2 00:15:39.941 } 00:15:39.941 ], 00:15:39.941 "driver_specific": {} 00:15:39.941 }, 00:15:39.941 { 00:15:39.941 "name": "Passthru0", 00:15:39.941 "aliases": [ 00:15:39.941 "2c1892b7-408f-59e3-8fbd-a53a3f348349" 00:15:39.941 ], 00:15:39.941 "product_name": "passthru", 00:15:39.941 "block_size": 512, 00:15:39.941 "num_blocks": 16384, 00:15:39.941 "uuid": "2c1892b7-408f-59e3-8fbd-a53a3f348349", 00:15:39.941 "assigned_rate_limits": { 00:15:39.941 "rw_ios_per_sec": 0, 00:15:39.941 "rw_mbytes_per_sec": 0, 00:15:39.941 "r_mbytes_per_sec": 0, 00:15:39.941 "w_mbytes_per_sec": 0 00:15:39.941 }, 00:15:39.941 "claimed": false, 00:15:39.941 "zoned": false, 00:15:39.941 "supported_io_types": { 00:15:39.941 "read": true, 00:15:39.941 "write": true, 00:15:39.941 "unmap": true, 00:15:39.941 "write_zeroes": true, 00:15:39.941 "flush": true, 00:15:39.941 "reset": true, 00:15:39.941 "compare": false, 00:15:39.941 "compare_and_write": false, 00:15:39.941 "abort": true, 00:15:39.941 "nvme_admin": false, 00:15:39.941 "nvme_io": false 00:15:39.941 }, 00:15:39.941 "memory_domains": [ 00:15:39.941 { 00:15:39.941 "dma_device_id": "system", 00:15:39.941 "dma_device_type": 1 00:15:39.941 }, 00:15:39.941 { 00:15:39.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.941 "dma_device_type": 2 00:15:39.941 } 00:15:39.941 ], 00:15:39.941 "driver_specific": { 00:15:39.941 "passthru": { 00:15:39.941 "name": "Passthru0", 00:15:39.941 "base_bdev_name": "Malloc0" 00:15:39.941 } 00:15:39.941 } 00:15:39.941 } 00:15:39.941 ]' 00:15:39.941 12:12:33 -- rpc/rpc.sh@21 -- # jq length 00:15:39.941 12:12:33 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:39.941 12:12:33 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:39.941 12:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.941 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:15:39.941 12:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.941 12:12:33 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:39.941 12:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.941 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:15:39.941 12:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.941 12:12:33 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:39.941 12:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.941 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:15:39.941 12:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.941 12:12:33 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:39.941 12:12:33 -- rpc/rpc.sh@26 -- # jq length 00:15:40.199 ************************************ 00:15:40.199 END TEST rpc_integrity 00:15:40.199 ************************************ 00:15:40.199 12:12:33 -- rpc/rpc.sh@26 
-- # '[' 0 == 0 ']' 00:15:40.199 00:15:40.199 real 0m0.331s 00:15:40.199 user 0m0.231s 00:15:40.199 sys 0m0.032s 00:15:40.199 12:12:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:40.199 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.199 12:12:33 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:15:40.199 12:12:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:40.199 12:12:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:40.199 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.199 ************************************ 00:15:40.199 START TEST rpc_plugins 00:15:40.199 ************************************ 00:15:40.199 12:12:33 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:15:40.199 12:12:33 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:15:40.199 12:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.199 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.199 12:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.199 12:12:33 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:15:40.199 12:12:33 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:15:40.199 12:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.199 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.199 12:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.199 12:12:33 -- rpc/rpc.sh@31 -- # bdevs='[ 00:15:40.199 { 00:15:40.199 "name": "Malloc1", 00:15:40.199 "aliases": [ 00:15:40.199 "de3c43e8-0339-4873-bd55-bd5949315292" 00:15:40.199 ], 00:15:40.199 "product_name": "Malloc disk", 00:15:40.199 "block_size": 4096, 00:15:40.199 "num_blocks": 256, 00:15:40.199 "uuid": "de3c43e8-0339-4873-bd55-bd5949315292", 00:15:40.199 "assigned_rate_limits": { 00:15:40.199 "rw_ios_per_sec": 0, 00:15:40.199 "rw_mbytes_per_sec": 0, 00:15:40.199 "r_mbytes_per_sec": 0, 00:15:40.199 "w_mbytes_per_sec": 0 00:15:40.199 }, 00:15:40.199 "claimed": false, 00:15:40.199 "zoned": false, 00:15:40.199 "supported_io_types": { 00:15:40.199 "read": true, 00:15:40.199 "write": true, 00:15:40.199 "unmap": true, 00:15:40.199 "write_zeroes": true, 00:15:40.199 "flush": true, 00:15:40.199 "reset": true, 00:15:40.199 "compare": false, 00:15:40.199 "compare_and_write": false, 00:15:40.199 "abort": true, 00:15:40.199 "nvme_admin": false, 00:15:40.199 "nvme_io": false 00:15:40.199 }, 00:15:40.199 "memory_domains": [ 00:15:40.199 { 00:15:40.199 "dma_device_id": "system", 00:15:40.199 "dma_device_type": 1 00:15:40.199 }, 00:15:40.199 { 00:15:40.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.199 "dma_device_type": 2 00:15:40.199 } 00:15:40.199 ], 00:15:40.199 "driver_specific": {} 00:15:40.199 } 00:15:40.199 ]' 00:15:40.199 12:12:33 -- rpc/rpc.sh@32 -- # jq length 00:15:40.199 12:12:33 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:15:40.199 12:12:33 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:15:40.199 12:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.199 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.199 12:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.199 12:12:33 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:15:40.199 12:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.199 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.199 12:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.199 12:12:33 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:15:40.199 12:12:33 -- rpc/rpc.sh@36 -- # jq 
length 00:15:40.457 ************************************ 00:15:40.457 END TEST rpc_plugins 00:15:40.457 ************************************ 00:15:40.457 12:12:33 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:15:40.457 00:15:40.457 real 0m0.174s 00:15:40.457 user 0m0.120s 00:15:40.457 sys 0m0.016s 00:15:40.457 12:12:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:40.457 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.457 12:12:33 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:15:40.457 12:12:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:40.457 12:12:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:40.457 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.457 ************************************ 00:15:40.457 START TEST rpc_trace_cmd_test 00:15:40.457 ************************************ 00:15:40.457 12:12:33 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:15:40.457 12:12:33 -- rpc/rpc.sh@40 -- # local info 00:15:40.457 12:12:33 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:15:40.457 12:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.457 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.457 12:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.457 12:12:33 -- rpc/rpc.sh@42 -- # info='{ 00:15:40.457 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58271", 00:15:40.457 "tpoint_group_mask": "0x8", 00:15:40.457 "iscsi_conn": { 00:15:40.458 "mask": "0x2", 00:15:40.458 "tpoint_mask": "0x0" 00:15:40.458 }, 00:15:40.458 "scsi": { 00:15:40.458 "mask": "0x4", 00:15:40.458 "tpoint_mask": "0x0" 00:15:40.458 }, 00:15:40.458 "bdev": { 00:15:40.458 "mask": "0x8", 00:15:40.458 "tpoint_mask": "0xffffffffffffffff" 00:15:40.458 }, 00:15:40.458 "nvmf_rdma": { 00:15:40.458 "mask": "0x10", 00:15:40.458 "tpoint_mask": "0x0" 00:15:40.458 }, 00:15:40.458 "nvmf_tcp": { 00:15:40.458 "mask": "0x20", 00:15:40.458 "tpoint_mask": "0x0" 00:15:40.458 }, 00:15:40.458 "ftl": { 00:15:40.458 "mask": "0x40", 00:15:40.458 "tpoint_mask": "0x0" 00:15:40.458 }, 00:15:40.458 "blobfs": { 00:15:40.458 "mask": "0x80", 00:15:40.458 "tpoint_mask": "0x0" 00:15:40.458 }, 00:15:40.458 "dsa": { 00:15:40.458 "mask": "0x200", 00:15:40.458 "tpoint_mask": "0x0" 00:15:40.458 }, 00:15:40.458 "thread": { 00:15:40.458 "mask": "0x400", 00:15:40.458 "tpoint_mask": "0x0" 00:15:40.458 }, 00:15:40.458 "nvme_pcie": { 00:15:40.458 "mask": "0x800", 00:15:40.458 "tpoint_mask": "0x0" 00:15:40.458 }, 00:15:40.458 "iaa": { 00:15:40.458 "mask": "0x1000", 00:15:40.458 "tpoint_mask": "0x0" 00:15:40.458 }, 00:15:40.458 "nvme_tcp": { 00:15:40.458 "mask": "0x2000", 00:15:40.458 "tpoint_mask": "0x0" 00:15:40.458 }, 00:15:40.458 "bdev_nvme": { 00:15:40.458 "mask": "0x4000", 00:15:40.458 "tpoint_mask": "0x0" 00:15:40.458 }, 00:15:40.458 "sock": { 00:15:40.458 "mask": "0x8000", 00:15:40.458 "tpoint_mask": "0x0" 00:15:40.458 } 00:15:40.458 }' 00:15:40.458 12:12:33 -- rpc/rpc.sh@43 -- # jq length 00:15:40.458 12:12:33 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:15:40.458 12:12:33 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:15:40.716 12:12:33 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:15:40.716 12:12:33 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:15:40.716 12:12:33 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:15:40.716 12:12:33 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:15:40.716 12:12:34 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:15:40.716 12:12:34 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 
00:15:40.716 ************************************ 00:15:40.717 END TEST rpc_trace_cmd_test 00:15:40.717 ************************************ 00:15:40.717 12:12:34 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:15:40.717 00:15:40.717 real 0m0.263s 00:15:40.717 user 0m0.230s 00:15:40.717 sys 0m0.023s 00:15:40.717 12:12:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:40.717 12:12:34 -- common/autotest_common.sh@10 -- # set +x 00:15:40.717 12:12:34 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:15:40.717 12:12:34 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:15:40.717 12:12:34 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:15:40.717 12:12:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:40.717 12:12:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:40.717 12:12:34 -- common/autotest_common.sh@10 -- # set +x 00:15:40.975 ************************************ 00:15:40.975 START TEST rpc_daemon_integrity 00:15:40.975 ************************************ 00:15:40.975 12:12:34 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:15:40.975 12:12:34 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:40.975 12:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.975 12:12:34 -- common/autotest_common.sh@10 -- # set +x 00:15:40.975 12:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.975 12:12:34 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:40.975 12:12:34 -- rpc/rpc.sh@13 -- # jq length 00:15:40.975 12:12:34 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:40.975 12:12:34 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:40.975 12:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.975 12:12:34 -- common/autotest_common.sh@10 -- # set +x 00:15:40.975 12:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.975 12:12:34 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:15:40.975 12:12:34 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:40.975 12:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.975 12:12:34 -- common/autotest_common.sh@10 -- # set +x 00:15:40.975 12:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.975 12:12:34 -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:40.975 { 00:15:40.975 "name": "Malloc2", 00:15:40.975 "aliases": [ 00:15:40.975 "b7dbc145-2037-48ea-aabd-db204a143298" 00:15:40.975 ], 00:15:40.975 "product_name": "Malloc disk", 00:15:40.975 "block_size": 512, 00:15:40.975 "num_blocks": 16384, 00:15:40.975 "uuid": "b7dbc145-2037-48ea-aabd-db204a143298", 00:15:40.975 "assigned_rate_limits": { 00:15:40.975 "rw_ios_per_sec": 0, 00:15:40.975 "rw_mbytes_per_sec": 0, 00:15:40.975 "r_mbytes_per_sec": 0, 00:15:40.975 "w_mbytes_per_sec": 0 00:15:40.975 }, 00:15:40.975 "claimed": false, 00:15:40.975 "zoned": false, 00:15:40.975 "supported_io_types": { 00:15:40.975 "read": true, 00:15:40.975 "write": true, 00:15:40.975 "unmap": true, 00:15:40.975 "write_zeroes": true, 00:15:40.975 "flush": true, 00:15:40.975 "reset": true, 00:15:40.975 "compare": false, 00:15:40.975 "compare_and_write": false, 00:15:40.975 "abort": true, 00:15:40.975 "nvme_admin": false, 00:15:40.975 "nvme_io": false 00:15:40.975 }, 00:15:40.975 "memory_domains": [ 00:15:40.975 { 00:15:40.975 "dma_device_id": "system", 00:15:40.975 "dma_device_type": 1 00:15:40.975 }, 00:15:40.975 { 00:15:40.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.975 "dma_device_type": 2 00:15:40.975 } 00:15:40.975 ], 00:15:40.975 "driver_specific": {} 00:15:40.975 } 00:15:40.975 ]' 00:15:40.975 12:12:34 -- 
rpc/rpc.sh@17 -- # jq length 00:15:40.975 12:12:34 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:40.975 12:12:34 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:15:40.975 12:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.975 12:12:34 -- common/autotest_common.sh@10 -- # set +x 00:15:40.975 [2024-04-26 12:12:34.359503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:15:40.975 [2024-04-26 12:12:34.359565] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.975 [2024-04-26 12:12:34.359587] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x164fd10 00:15:40.975 [2024-04-26 12:12:34.359597] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.975 [2024-04-26 12:12:34.361395] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.975 [2024-04-26 12:12:34.361431] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:40.975 Passthru0 00:15:40.975 12:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.975 12:12:34 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:40.975 12:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.975 12:12:34 -- common/autotest_common.sh@10 -- # set +x 00:15:40.975 12:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.975 12:12:34 -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:40.975 { 00:15:40.975 "name": "Malloc2", 00:15:40.975 "aliases": [ 00:15:40.975 "b7dbc145-2037-48ea-aabd-db204a143298" 00:15:40.975 ], 00:15:40.975 "product_name": "Malloc disk", 00:15:40.975 "block_size": 512, 00:15:40.975 "num_blocks": 16384, 00:15:40.975 "uuid": "b7dbc145-2037-48ea-aabd-db204a143298", 00:15:40.975 "assigned_rate_limits": { 00:15:40.975 "rw_ios_per_sec": 0, 00:15:40.975 "rw_mbytes_per_sec": 0, 00:15:40.975 "r_mbytes_per_sec": 0, 00:15:40.975 "w_mbytes_per_sec": 0 00:15:40.975 }, 00:15:40.975 "claimed": true, 00:15:40.975 "claim_type": "exclusive_write", 00:15:40.975 "zoned": false, 00:15:40.975 "supported_io_types": { 00:15:40.975 "read": true, 00:15:40.975 "write": true, 00:15:40.975 "unmap": true, 00:15:40.975 "write_zeroes": true, 00:15:40.975 "flush": true, 00:15:40.975 "reset": true, 00:15:40.975 "compare": false, 00:15:40.975 "compare_and_write": false, 00:15:40.975 "abort": true, 00:15:40.975 "nvme_admin": false, 00:15:40.975 "nvme_io": false 00:15:40.976 }, 00:15:40.976 "memory_domains": [ 00:15:40.976 { 00:15:40.976 "dma_device_id": "system", 00:15:40.976 "dma_device_type": 1 00:15:40.976 }, 00:15:40.976 { 00:15:40.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.976 "dma_device_type": 2 00:15:40.976 } 00:15:40.976 ], 00:15:40.976 "driver_specific": {} 00:15:40.976 }, 00:15:40.976 { 00:15:40.976 "name": "Passthru0", 00:15:40.976 "aliases": [ 00:15:40.976 "568bbfb0-4d73-57d7-aecc-b3cc3ee114e4" 00:15:40.976 ], 00:15:40.976 "product_name": "passthru", 00:15:40.976 "block_size": 512, 00:15:40.976 "num_blocks": 16384, 00:15:40.976 "uuid": "568bbfb0-4d73-57d7-aecc-b3cc3ee114e4", 00:15:40.976 "assigned_rate_limits": { 00:15:40.976 "rw_ios_per_sec": 0, 00:15:40.976 "rw_mbytes_per_sec": 0, 00:15:40.976 "r_mbytes_per_sec": 0, 00:15:40.976 "w_mbytes_per_sec": 0 00:15:40.976 }, 00:15:40.976 "claimed": false, 00:15:40.976 "zoned": false, 00:15:40.976 "supported_io_types": { 00:15:40.976 "read": true, 00:15:40.976 "write": true, 00:15:40.976 "unmap": true, 00:15:40.976 "write_zeroes": true, 00:15:40.976 "flush": 
true, 00:15:40.976 "reset": true, 00:15:40.976 "compare": false, 00:15:40.976 "compare_and_write": false, 00:15:40.976 "abort": true, 00:15:40.976 "nvme_admin": false, 00:15:40.976 "nvme_io": false 00:15:40.976 }, 00:15:40.976 "memory_domains": [ 00:15:40.976 { 00:15:40.976 "dma_device_id": "system", 00:15:40.976 "dma_device_type": 1 00:15:40.976 }, 00:15:40.976 { 00:15:40.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.976 "dma_device_type": 2 00:15:40.976 } 00:15:40.976 ], 00:15:40.976 "driver_specific": { 00:15:40.976 "passthru": { 00:15:40.976 "name": "Passthru0", 00:15:40.976 "base_bdev_name": "Malloc2" 00:15:40.976 } 00:15:40.976 } 00:15:40.976 } 00:15:40.976 ]' 00:15:40.976 12:12:34 -- rpc/rpc.sh@21 -- # jq length 00:15:41.234 12:12:34 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:41.234 12:12:34 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:41.234 12:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:41.234 12:12:34 -- common/autotest_common.sh@10 -- # set +x 00:15:41.234 12:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:41.234 12:12:34 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:41.234 12:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:41.234 12:12:34 -- common/autotest_common.sh@10 -- # set +x 00:15:41.234 12:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:41.234 12:12:34 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:41.234 12:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:41.234 12:12:34 -- common/autotest_common.sh@10 -- # set +x 00:15:41.234 12:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:41.234 12:12:34 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:41.234 12:12:34 -- rpc/rpc.sh@26 -- # jq length 00:15:41.234 ************************************ 00:15:41.234 END TEST rpc_daemon_integrity 00:15:41.234 ************************************ 00:15:41.234 12:12:34 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:41.234 00:15:41.234 real 0m0.322s 00:15:41.234 user 0m0.224s 00:15:41.234 sys 0m0.035s 00:15:41.234 12:12:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:41.234 12:12:34 -- common/autotest_common.sh@10 -- # set +x 00:15:41.234 12:12:34 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:41.234 12:12:34 -- rpc/rpc.sh@84 -- # killprocess 58271 00:15:41.234 12:12:34 -- common/autotest_common.sh@936 -- # '[' -z 58271 ']' 00:15:41.234 12:12:34 -- common/autotest_common.sh@940 -- # kill -0 58271 00:15:41.234 12:12:34 -- common/autotest_common.sh@941 -- # uname 00:15:41.234 12:12:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:41.234 12:12:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58271 00:15:41.234 killing process with pid 58271 00:15:41.234 12:12:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:41.234 12:12:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:41.234 12:12:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58271' 00:15:41.234 12:12:34 -- common/autotest_common.sh@955 -- # kill 58271 00:15:41.234 12:12:34 -- common/autotest_common.sh@960 -- # wait 58271 00:15:41.800 00:15:41.800 real 0m3.086s 00:15:41.800 user 0m4.012s 00:15:41.800 sys 0m0.761s 00:15:41.800 12:12:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:41.800 12:12:35 -- common/autotest_common.sh@10 -- # set +x 00:15:41.800 ************************************ 00:15:41.800 END TEST rpc 00:15:41.800 ************************************ 
00:15:41.800 12:12:35 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:15:41.800 12:12:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:41.800 12:12:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:41.800 12:12:35 -- common/autotest_common.sh@10 -- # set +x 00:15:41.800 ************************************ 00:15:41.800 START TEST skip_rpc 00:15:41.800 ************************************ 00:15:41.800 12:12:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:15:41.800 * Looking for test storage... 00:15:41.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:15:41.800 12:12:35 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:41.801 12:12:35 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:41.801 12:12:35 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:15:41.801 12:12:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:41.801 12:12:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:41.801 12:12:35 -- common/autotest_common.sh@10 -- # set +x 00:15:42.058 ************************************ 00:15:42.058 START TEST skip_rpc 00:15:42.058 ************************************ 00:15:42.058 12:12:35 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:15:42.058 12:12:35 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58495 00:15:42.058 12:12:35 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:42.058 12:12:35 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:15:42.058 12:12:35 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:15:42.058 [2024-04-26 12:12:35.402359] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:15:42.058 [2024-04-26 12:12:35.402469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58495 ] 00:15:42.316 [2024-04-26 12:12:35.543542] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.316 [2024-04-26 12:12:35.636273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.592 12:12:40 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:15:47.592 12:12:40 -- common/autotest_common.sh@638 -- # local es=0 00:15:47.592 12:12:40 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:15:47.592 12:12:40 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:15:47.592 12:12:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:47.592 12:12:40 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:15:47.592 12:12:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:47.592 12:12:40 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:15:47.592 12:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.592 12:12:40 -- common/autotest_common.sh@10 -- # set +x 00:15:47.592 12:12:40 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:47.592 12:12:40 -- common/autotest_common.sh@641 -- # es=1 00:15:47.592 12:12:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:47.592 12:12:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:47.592 12:12:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:47.592 12:12:40 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:15:47.592 12:12:40 -- rpc/skip_rpc.sh@23 -- # killprocess 58495 00:15:47.592 12:12:40 -- common/autotest_common.sh@936 -- # '[' -z 58495 ']' 00:15:47.592 12:12:40 -- common/autotest_common.sh@940 -- # kill -0 58495 00:15:47.592 12:12:40 -- common/autotest_common.sh@941 -- # uname 00:15:47.592 12:12:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:47.592 12:12:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58495 00:15:47.592 12:12:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:47.592 12:12:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:47.592 12:12:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58495' 00:15:47.592 killing process with pid 58495 00:15:47.592 12:12:40 -- common/autotest_common.sh@955 -- # kill 58495 00:15:47.592 12:12:40 -- common/autotest_common.sh@960 -- # wait 58495 00:15:47.592 00:15:47.592 real 0m5.500s 00:15:47.592 user 0m5.100s 00:15:47.592 sys 0m0.300s 00:15:47.592 12:12:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:47.592 ************************************ 00:15:47.592 END TEST skip_rpc 00:15:47.592 ************************************ 00:15:47.592 12:12:40 -- common/autotest_common.sh@10 -- # set +x 00:15:47.592 12:12:40 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:15:47.592 12:12:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:47.592 12:12:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:47.592 12:12:40 -- common/autotest_common.sh@10 -- # set +x 00:15:47.592 ************************************ 00:15:47.592 START TEST skip_rpc_with_json 00:15:47.592 ************************************ 00:15:47.592 12:12:40 -- common/autotest_common.sh@1111 -- # 
test_skip_rpc_with_json 00:15:47.592 12:12:40 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:15:47.592 12:12:40 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58584 00:15:47.592 12:12:40 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:47.592 12:12:40 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:47.592 12:12:40 -- rpc/skip_rpc.sh@31 -- # waitforlisten 58584 00:15:47.592 12:12:40 -- common/autotest_common.sh@817 -- # '[' -z 58584 ']' 00:15:47.592 12:12:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.592 12:12:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:47.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.593 12:12:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.593 12:12:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:47.593 12:12:40 -- common/autotest_common.sh@10 -- # set +x 00:15:47.593 [2024-04-26 12:12:41.024095] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:15:47.593 [2024-04-26 12:12:41.024212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58584 ] 00:15:47.865 [2024-04-26 12:12:41.165279] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.865 [2024-04-26 12:12:41.276854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.801 12:12:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:48.801 12:12:41 -- common/autotest_common.sh@850 -- # return 0 00:15:48.801 12:12:41 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:15:48.801 12:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:48.801 12:12:41 -- common/autotest_common.sh@10 -- # set +x 00:15:48.801 [2024-04-26 12:12:41.994230] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:15:48.801 request: 00:15:48.801 { 00:15:48.801 "trtype": "tcp", 00:15:48.801 "method": "nvmf_get_transports", 00:15:48.801 "req_id": 1 00:15:48.801 } 00:15:48.801 Got JSON-RPC error response 00:15:48.801 response: 00:15:48.801 { 00:15:48.801 "code": -19, 00:15:48.801 "message": "No such device" 00:15:48.801 } 00:15:48.801 12:12:41 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:48.801 12:12:41 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:15:48.801 12:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:48.801 12:12:41 -- common/autotest_common.sh@10 -- # set +x 00:15:48.801 [2024-04-26 12:12:42.006335] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:48.801 12:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:48.801 12:12:42 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:15:48.801 12:12:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:48.801 12:12:42 -- common/autotest_common.sh@10 -- # set +x 00:15:48.801 12:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:48.801 12:12:42 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:48.801 { 00:15:48.801 "subsystems": [ 00:15:48.801 { 00:15:48.801 "subsystem": "keyring", 00:15:48.801 "config": [] 00:15:48.801 }, 00:15:48.801 { 00:15:48.801 
"subsystem": "iobuf", 00:15:48.801 "config": [ 00:15:48.801 { 00:15:48.801 "method": "iobuf_set_options", 00:15:48.801 "params": { 00:15:48.801 "small_pool_count": 8192, 00:15:48.801 "large_pool_count": 1024, 00:15:48.801 "small_bufsize": 8192, 00:15:48.801 "large_bufsize": 135168 00:15:48.801 } 00:15:48.801 } 00:15:48.801 ] 00:15:48.801 }, 00:15:48.801 { 00:15:48.801 "subsystem": "sock", 00:15:48.801 "config": [ 00:15:48.801 { 00:15:48.801 "method": "sock_impl_set_options", 00:15:48.801 "params": { 00:15:48.801 "impl_name": "uring", 00:15:48.801 "recv_buf_size": 2097152, 00:15:48.801 "send_buf_size": 2097152, 00:15:48.801 "enable_recv_pipe": true, 00:15:48.801 "enable_quickack": false, 00:15:48.801 "enable_placement_id": 0, 00:15:48.801 "enable_zerocopy_send_server": false, 00:15:48.801 "enable_zerocopy_send_client": false, 00:15:48.801 "zerocopy_threshold": 0, 00:15:48.801 "tls_version": 0, 00:15:48.801 "enable_ktls": false 00:15:48.801 } 00:15:48.801 }, 00:15:48.801 { 00:15:48.801 "method": "sock_impl_set_options", 00:15:48.801 "params": { 00:15:48.801 "impl_name": "posix", 00:15:48.801 "recv_buf_size": 2097152, 00:15:48.801 "send_buf_size": 2097152, 00:15:48.801 "enable_recv_pipe": true, 00:15:48.801 "enable_quickack": false, 00:15:48.801 "enable_placement_id": 0, 00:15:48.801 "enable_zerocopy_send_server": true, 00:15:48.801 "enable_zerocopy_send_client": false, 00:15:48.801 "zerocopy_threshold": 0, 00:15:48.801 "tls_version": 0, 00:15:48.801 "enable_ktls": false 00:15:48.801 } 00:15:48.801 }, 00:15:48.801 { 00:15:48.801 "method": "sock_impl_set_options", 00:15:48.801 "params": { 00:15:48.801 "impl_name": "ssl", 00:15:48.801 "recv_buf_size": 4096, 00:15:48.801 "send_buf_size": 4096, 00:15:48.801 "enable_recv_pipe": true, 00:15:48.801 "enable_quickack": false, 00:15:48.801 "enable_placement_id": 0, 00:15:48.801 "enable_zerocopy_send_server": true, 00:15:48.801 "enable_zerocopy_send_client": false, 00:15:48.801 "zerocopy_threshold": 0, 00:15:48.801 "tls_version": 0, 00:15:48.801 "enable_ktls": false 00:15:48.801 } 00:15:48.801 } 00:15:48.801 ] 00:15:48.801 }, 00:15:48.801 { 00:15:48.801 "subsystem": "vmd", 00:15:48.801 "config": [] 00:15:48.801 }, 00:15:48.801 { 00:15:48.801 "subsystem": "accel", 00:15:48.801 "config": [ 00:15:48.801 { 00:15:48.801 "method": "accel_set_options", 00:15:48.801 "params": { 00:15:48.801 "small_cache_size": 128, 00:15:48.801 "large_cache_size": 16, 00:15:48.801 "task_count": 2048, 00:15:48.801 "sequence_count": 2048, 00:15:48.801 "buf_count": 2048 00:15:48.801 } 00:15:48.801 } 00:15:48.801 ] 00:15:48.801 }, 00:15:48.801 { 00:15:48.801 "subsystem": "bdev", 00:15:48.801 "config": [ 00:15:48.801 { 00:15:48.801 "method": "bdev_set_options", 00:15:48.801 "params": { 00:15:48.801 "bdev_io_pool_size": 65535, 00:15:48.801 "bdev_io_cache_size": 256, 00:15:48.801 "bdev_auto_examine": true, 00:15:48.801 "iobuf_small_cache_size": 128, 00:15:48.801 "iobuf_large_cache_size": 16 00:15:48.801 } 00:15:48.801 }, 00:15:48.801 { 00:15:48.801 "method": "bdev_raid_set_options", 00:15:48.801 "params": { 00:15:48.801 "process_window_size_kb": 1024 00:15:48.801 } 00:15:48.801 }, 00:15:48.801 { 00:15:48.801 "method": "bdev_iscsi_set_options", 00:15:48.801 "params": { 00:15:48.801 "timeout_sec": 30 00:15:48.801 } 00:15:48.801 }, 00:15:48.801 { 00:15:48.801 "method": "bdev_nvme_set_options", 00:15:48.801 "params": { 00:15:48.801 "action_on_timeout": "none", 00:15:48.801 "timeout_us": 0, 00:15:48.801 "timeout_admin_us": 0, 00:15:48.801 "keep_alive_timeout_ms": 10000, 00:15:48.801 
"arbitration_burst": 0, 00:15:48.801 "low_priority_weight": 0, 00:15:48.801 "medium_priority_weight": 0, 00:15:48.801 "high_priority_weight": 0, 00:15:48.801 "nvme_adminq_poll_period_us": 10000, 00:15:48.801 "nvme_ioq_poll_period_us": 0, 00:15:48.801 "io_queue_requests": 0, 00:15:48.801 "delay_cmd_submit": true, 00:15:48.801 "transport_retry_count": 4, 00:15:48.801 "bdev_retry_count": 3, 00:15:48.801 "transport_ack_timeout": 0, 00:15:48.801 "ctrlr_loss_timeout_sec": 0, 00:15:48.801 "reconnect_delay_sec": 0, 00:15:48.801 "fast_io_fail_timeout_sec": 0, 00:15:48.801 "disable_auto_failback": false, 00:15:48.801 "generate_uuids": false, 00:15:48.801 "transport_tos": 0, 00:15:48.801 "nvme_error_stat": false, 00:15:48.801 "rdma_srq_size": 0, 00:15:48.801 "io_path_stat": false, 00:15:48.801 "allow_accel_sequence": false, 00:15:48.801 "rdma_max_cq_size": 0, 00:15:48.801 "rdma_cm_event_timeout_ms": 0, 00:15:48.801 "dhchap_digests": [ 00:15:48.801 "sha256", 00:15:48.801 "sha384", 00:15:48.801 "sha512" 00:15:48.801 ], 00:15:48.801 "dhchap_dhgroups": [ 00:15:48.801 "null", 00:15:48.801 "ffdhe2048", 00:15:48.801 "ffdhe3072", 00:15:48.801 "ffdhe4096", 00:15:48.801 "ffdhe6144", 00:15:48.801 "ffdhe8192" 00:15:48.801 ] 00:15:48.801 } 00:15:48.801 }, 00:15:48.801 { 00:15:48.801 "method": "bdev_nvme_set_hotplug", 00:15:48.801 "params": { 00:15:48.801 "period_us": 100000, 00:15:48.801 "enable": false 00:15:48.801 } 00:15:48.801 }, 00:15:48.801 { 00:15:48.801 "method": "bdev_wait_for_examine" 00:15:48.801 } 00:15:48.801 ] 00:15:48.801 }, 00:15:48.801 { 00:15:48.801 "subsystem": "scsi", 00:15:48.801 "config": null 00:15:48.801 }, 00:15:48.801 { 00:15:48.801 "subsystem": "scheduler", 00:15:48.801 "config": [ 00:15:48.801 { 00:15:48.801 "method": "framework_set_scheduler", 00:15:48.801 "params": { 00:15:48.801 "name": "static" 00:15:48.801 } 00:15:48.801 } 00:15:48.801 ] 00:15:48.801 }, 00:15:48.801 { 00:15:48.802 "subsystem": "vhost_scsi", 00:15:48.802 "config": [] 00:15:48.802 }, 00:15:48.802 { 00:15:48.802 "subsystem": "vhost_blk", 00:15:48.802 "config": [] 00:15:48.802 }, 00:15:48.802 { 00:15:48.802 "subsystem": "ublk", 00:15:48.802 "config": [] 00:15:48.802 }, 00:15:48.802 { 00:15:48.802 "subsystem": "nbd", 00:15:48.802 "config": [] 00:15:48.802 }, 00:15:48.802 { 00:15:48.802 "subsystem": "nvmf", 00:15:48.802 "config": [ 00:15:48.802 { 00:15:48.802 "method": "nvmf_set_config", 00:15:48.802 "params": { 00:15:48.802 "discovery_filter": "match_any", 00:15:48.802 "admin_cmd_passthru": { 00:15:48.802 "identify_ctrlr": false 00:15:48.802 } 00:15:48.802 } 00:15:48.802 }, 00:15:48.802 { 00:15:48.802 "method": "nvmf_set_max_subsystems", 00:15:48.802 "params": { 00:15:48.802 "max_subsystems": 1024 00:15:48.802 } 00:15:48.802 }, 00:15:48.802 { 00:15:48.802 "method": "nvmf_set_crdt", 00:15:48.802 "params": { 00:15:48.802 "crdt1": 0, 00:15:48.802 "crdt2": 0, 00:15:48.802 "crdt3": 0 00:15:48.802 } 00:15:48.802 }, 00:15:48.802 { 00:15:48.802 "method": "nvmf_create_transport", 00:15:48.802 "params": { 00:15:48.802 "trtype": "TCP", 00:15:48.802 "max_queue_depth": 128, 00:15:48.802 "max_io_qpairs_per_ctrlr": 127, 00:15:48.802 "in_capsule_data_size": 4096, 00:15:48.802 "max_io_size": 131072, 00:15:48.802 "io_unit_size": 131072, 00:15:48.802 "max_aq_depth": 128, 00:15:48.802 "num_shared_buffers": 511, 00:15:48.802 "buf_cache_size": 4294967295, 00:15:48.802 "dif_insert_or_strip": false, 00:15:48.802 "zcopy": false, 00:15:48.802 "c2h_success": true, 00:15:48.802 "sock_priority": 0, 00:15:48.802 "abort_timeout_sec": 1, 00:15:48.802 
"ack_timeout": 0, 00:15:48.802 "data_wr_pool_size": 0 00:15:48.802 } 00:15:48.802 } 00:15:48.802 ] 00:15:48.802 }, 00:15:48.802 { 00:15:48.802 "subsystem": "iscsi", 00:15:48.802 "config": [ 00:15:48.802 { 00:15:48.802 "method": "iscsi_set_options", 00:15:48.802 "params": { 00:15:48.802 "node_base": "iqn.2016-06.io.spdk", 00:15:48.802 "max_sessions": 128, 00:15:48.802 "max_connections_per_session": 2, 00:15:48.802 "max_queue_depth": 64, 00:15:48.802 "default_time2wait": 2, 00:15:48.802 "default_time2retain": 20, 00:15:48.802 "first_burst_length": 8192, 00:15:48.802 "immediate_data": true, 00:15:48.802 "allow_duplicated_isid": false, 00:15:48.802 "error_recovery_level": 0, 00:15:48.802 "nop_timeout": 60, 00:15:48.802 "nop_in_interval": 30, 00:15:48.802 "disable_chap": false, 00:15:48.802 "require_chap": false, 00:15:48.802 "mutual_chap": false, 00:15:48.802 "chap_group": 0, 00:15:48.802 "max_large_datain_per_connection": 64, 00:15:48.802 "max_r2t_per_connection": 4, 00:15:48.802 "pdu_pool_size": 36864, 00:15:48.802 "immediate_data_pool_size": 16384, 00:15:48.802 "data_out_pool_size": 2048 00:15:48.802 } 00:15:48.802 } 00:15:48.802 ] 00:15:48.802 } 00:15:48.802 ] 00:15:48.802 } 00:15:48.802 12:12:42 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:48.802 12:12:42 -- rpc/skip_rpc.sh@40 -- # killprocess 58584 00:15:48.802 12:12:42 -- common/autotest_common.sh@936 -- # '[' -z 58584 ']' 00:15:48.802 12:12:42 -- common/autotest_common.sh@940 -- # kill -0 58584 00:15:48.802 12:12:42 -- common/autotest_common.sh@941 -- # uname 00:15:48.802 12:12:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:48.802 12:12:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58584 00:15:48.802 12:12:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:48.802 killing process with pid 58584 00:15:48.802 12:12:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:48.802 12:12:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58584' 00:15:48.802 12:12:42 -- common/autotest_common.sh@955 -- # kill 58584 00:15:48.802 12:12:42 -- common/autotest_common.sh@960 -- # wait 58584 00:15:49.370 12:12:42 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58613 00:15:49.370 12:12:42 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:49.370 12:12:42 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:15:54.636 12:12:47 -- rpc/skip_rpc.sh@50 -- # killprocess 58613 00:15:54.636 12:12:47 -- common/autotest_common.sh@936 -- # '[' -z 58613 ']' 00:15:54.636 12:12:47 -- common/autotest_common.sh@940 -- # kill -0 58613 00:15:54.636 12:12:47 -- common/autotest_common.sh@941 -- # uname 00:15:54.636 12:12:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:54.636 12:12:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58613 00:15:54.636 12:12:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:54.636 12:12:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:54.636 killing process with pid 58613 00:15:54.636 12:12:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58613' 00:15:54.636 12:12:47 -- common/autotest_common.sh@955 -- # kill 58613 00:15:54.636 12:12:47 -- common/autotest_common.sh@960 -- # wait 58613 00:15:54.894 12:12:48 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:54.894 12:12:48 -- 
rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:54.894 00:15:54.894 real 0m7.180s 00:15:54.894 user 0m6.861s 00:15:54.894 sys 0m0.699s 00:15:54.894 12:12:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:54.894 12:12:48 -- common/autotest_common.sh@10 -- # set +x 00:15:54.894 ************************************ 00:15:54.894 END TEST skip_rpc_with_json 00:15:54.894 ************************************ 00:15:54.894 12:12:48 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:15:54.894 12:12:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:54.894 12:12:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:54.894 12:12:48 -- common/autotest_common.sh@10 -- # set +x 00:15:54.894 ************************************ 00:15:54.894 START TEST skip_rpc_with_delay 00:15:54.894 ************************************ 00:15:54.894 12:12:48 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:15:54.894 12:12:48 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:54.894 12:12:48 -- common/autotest_common.sh@638 -- # local es=0 00:15:54.894 12:12:48 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:54.894 12:12:48 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:54.894 12:12:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:54.894 12:12:48 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:54.894 12:12:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:54.894 12:12:48 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:54.894 12:12:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:54.894 12:12:48 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:54.894 12:12:48 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:54.894 12:12:48 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:54.894 [2024-04-26 12:12:48.329734] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
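The *ERROR* line above is exactly what skip_rpc_with_delay checks for: --wait-for-rpc parks initialization until an RPC arrives, so combining it with --no-rpc-server has to make spdk_app_start fail rather than hang. A minimal sketch of the check, assuming the in-tree spdk_tgt binary (the NOT wrapper above simply inverts the exit status):
# must exit non-zero: with no RPC server, nothing could ever deliver the resume RPC
if ! build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo 'rejected as expected'
fi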
00:15:54.894 [2024-04-26 12:12:48.329895] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:15:54.894 12:12:48 -- common/autotest_common.sh@641 -- # es=1 00:15:54.894 12:12:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:54.894 12:12:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:54.894 12:12:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:54.894 00:15:54.894 real 0m0.088s 00:15:54.894 user 0m0.046s 00:15:54.894 sys 0m0.041s 00:15:54.894 12:12:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:54.894 12:12:48 -- common/autotest_common.sh@10 -- # set +x 00:15:54.894 ************************************ 00:15:54.894 END TEST skip_rpc_with_delay 00:15:54.894 ************************************ 00:15:55.150 12:12:48 -- rpc/skip_rpc.sh@77 -- # uname 00:15:55.150 12:12:48 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:15:55.150 12:12:48 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:15:55.150 12:12:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:55.150 12:12:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:55.150 12:12:48 -- common/autotest_common.sh@10 -- # set +x 00:15:55.150 ************************************ 00:15:55.150 START TEST exit_on_failed_rpc_init 00:15:55.150 ************************************ 00:15:55.150 12:12:48 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:15:55.150 12:12:48 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58738 00:15:55.150 12:12:48 -- rpc/skip_rpc.sh@63 -- # waitforlisten 58738 00:15:55.150 12:12:48 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:55.150 12:12:48 -- common/autotest_common.sh@817 -- # '[' -z 58738 ']' 00:15:55.150 12:12:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.150 12:12:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:55.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.150 12:12:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.150 12:12:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:55.150 12:12:48 -- common/autotest_common.sh@10 -- # set +x 00:15:55.150 [2024-04-26 12:12:48.534533] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:15:55.150 [2024-04-26 12:12:48.534640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58738 ] 00:15:55.442 [2024-04-26 12:12:48.673323] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.442 [2024-04-26 12:12:48.786986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.377 12:12:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:56.377 12:12:49 -- common/autotest_common.sh@850 -- # return 0 00:15:56.377 12:12:49 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:56.377 12:12:49 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:56.377 12:12:49 -- common/autotest_common.sh@638 -- # local es=0 00:15:56.377 12:12:49 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:56.377 12:12:49 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:56.377 12:12:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:56.377 12:12:49 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:56.377 12:12:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:56.377 12:12:49 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:56.377 12:12:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:56.377 12:12:49 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:56.377 12:12:49 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:56.377 12:12:49 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:56.377 [2024-04-26 12:12:49.572706] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:15:56.377 [2024-04-26 12:12:49.572816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58756 ] 00:15:56.377 [2024-04-26 12:12:49.709621] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.634 [2024-04-26 12:12:49.846394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.634 [2024-04-26 12:12:49.846514] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:15:56.634 [2024-04-26 12:12:49.846531] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:15:56.634 [2024-04-26 12:12:49.846543] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:56.634 12:12:49 -- common/autotest_common.sh@641 -- # es=234 00:15:56.634 12:12:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:56.634 12:12:49 -- common/autotest_common.sh@650 -- # es=106 00:15:56.634 12:12:49 -- common/autotest_common.sh@651 -- # case "$es" in 00:15:56.634 12:12:49 -- common/autotest_common.sh@658 -- # es=1 00:15:56.634 12:12:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:56.634 12:12:49 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:56.634 12:12:49 -- rpc/skip_rpc.sh@70 -- # killprocess 58738 00:15:56.634 12:12:49 -- common/autotest_common.sh@936 -- # '[' -z 58738 ']' 00:15:56.634 12:12:49 -- common/autotest_common.sh@940 -- # kill -0 58738 00:15:56.634 12:12:49 -- common/autotest_common.sh@941 -- # uname 00:15:56.634 12:12:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:56.634 12:12:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58738 00:15:56.634 12:12:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:56.634 12:12:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:56.634 killing process with pid 58738 00:15:56.634 12:12:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58738' 00:15:56.634 12:12:49 -- common/autotest_common.sh@955 -- # kill 58738 00:15:56.634 12:12:49 -- common/autotest_common.sh@960 -- # wait 58738 00:15:57.198 00:15:57.198 real 0m1.989s 00:15:57.198 user 0m2.316s 00:15:57.198 sys 0m0.460s 00:15:57.198 12:12:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:57.198 12:12:50 -- common/autotest_common.sh@10 -- # set +x 00:15:57.198 ************************************ 00:15:57.198 END TEST exit_on_failed_rpc_init 00:15:57.198 ************************************ 00:15:57.198 12:12:50 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:57.198 00:15:57.198 real 0m15.354s 00:15:57.198 user 0m14.537s 00:15:57.198 sys 0m1.808s 00:15:57.198 12:12:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:57.198 12:12:50 -- common/autotest_common.sh@10 -- # set +x 00:15:57.198 ************************************ 00:15:57.198 END TEST skip_rpc 00:15:57.198 ************************************ 00:15:57.198 12:12:50 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:57.198 12:12:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:57.198 12:12:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:57.198 12:12:50 -- common/autotest_common.sh@10 -- # set +x 00:15:57.198 ************************************ 00:15:57.198 START TEST rpc_client 00:15:57.198 ************************************ 00:15:57.198 12:12:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:57.455 * Looking for test storage... 
00:15:57.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:15:57.455 12:12:50 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:15:57.455 OK 00:15:57.455 12:12:50 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:15:57.455 00:15:57.455 real 0m0.111s 00:15:57.455 user 0m0.054s 00:15:57.455 sys 0m0.062s 00:15:57.455 12:12:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:57.455 12:12:50 -- common/autotest_common.sh@10 -- # set +x 00:15:57.455 ************************************ 00:15:57.455 END TEST rpc_client 00:15:57.455 ************************************ 00:15:57.455 12:12:50 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:57.455 12:12:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:57.455 12:12:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:57.455 12:12:50 -- common/autotest_common.sh@10 -- # set +x 00:15:57.455 ************************************ 00:15:57.455 START TEST json_config 00:15:57.455 ************************************ 00:15:57.455 12:12:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:57.455 12:12:50 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:57.455 12:12:50 -- nvmf/common.sh@7 -- # uname -s 00:15:57.712 12:12:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.712 12:12:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.712 12:12:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.712 12:12:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.712 12:12:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.712 12:12:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.712 12:12:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.712 12:12:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.712 12:12:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.712 12:12:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.712 12:12:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:15:57.712 12:12:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:15:57.712 12:12:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.712 12:12:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.712 12:12:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:57.712 12:12:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.712 12:12:50 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:57.712 12:12:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.712 12:12:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.712 12:12:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.712 12:12:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.712 12:12:50 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.712 12:12:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.712 12:12:50 -- paths/export.sh@5 -- # export PATH 00:15:57.713 12:12:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.713 12:12:50 -- nvmf/common.sh@47 -- # : 0 00:15:57.713 12:12:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:57.713 12:12:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:57.713 12:12:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.713 12:12:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.713 12:12:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.713 12:12:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:57.713 12:12:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:57.713 12:12:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:57.713 12:12:50 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:57.713 12:12:50 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:15:57.713 12:12:50 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:15:57.713 12:12:50 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:15:57.713 12:12:50 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:15:57.713 12:12:50 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:15:57.713 12:12:50 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:15:57.713 12:12:50 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:15:57.713 12:12:50 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:15:57.713 12:12:50 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:15:57.713 12:12:50 -- json_config/json_config.sh@33 -- # declare -A app_params 00:15:57.713 12:12:50 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:15:57.713 12:12:50 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:15:57.713 12:12:50 -- json_config/json_config.sh@40 -- # last_event_id=0 00:15:57.713 
12:12:50 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:15:57.713 INFO: JSON configuration test init 00:15:57.713 12:12:50 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:15:57.713 12:12:50 -- json_config/json_config.sh@357 -- # json_config_test_init 00:15:57.713 12:12:50 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:15:57.713 12:12:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:57.713 12:12:50 -- common/autotest_common.sh@10 -- # set +x 00:15:57.713 12:12:50 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:15:57.713 12:12:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:57.713 12:12:50 -- common/autotest_common.sh@10 -- # set +x 00:15:57.713 12:12:50 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:15:57.713 12:12:50 -- json_config/common.sh@9 -- # local app=target 00:15:57.713 12:12:50 -- json_config/common.sh@10 -- # shift 00:15:57.713 12:12:50 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:57.713 12:12:50 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:57.713 12:12:50 -- json_config/common.sh@15 -- # local app_extra_params= 00:15:57.713 12:12:50 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:57.713 12:12:50 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:57.713 12:12:50 -- json_config/common.sh@22 -- # app_pid["$app"]=58884 00:15:57.713 Waiting for target to run... 00:15:57.713 12:12:50 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:57.713 12:12:50 -- json_config/common.sh@25 -- # waitforlisten 58884 /var/tmp/spdk_tgt.sock 00:15:57.713 12:12:50 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:15:57.713 12:12:50 -- common/autotest_common.sh@817 -- # '[' -z 58884 ']' 00:15:57.713 12:12:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:57.713 12:12:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:57.713 12:12:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:57.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:57.713 12:12:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:57.713 12:12:50 -- common/autotest_common.sh@10 -- # set +x 00:15:57.713 [2024-04-26 12:12:51.012830] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:15:57.713 [2024-04-26 12:12:51.012943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58884 ] 00:15:58.277 [2024-04-26 12:12:51.470366] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.277 [2024-04-26 12:12:51.575303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.843 12:12:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:58.843 00:15:58.843 12:12:52 -- common/autotest_common.sh@850 -- # return 0 00:15:58.843 12:12:52 -- json_config/common.sh@26 -- # echo '' 00:15:58.843 12:12:52 -- json_config/json_config.sh@269 -- # create_accel_config 00:15:58.843 12:12:52 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:15:58.843 12:12:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:58.843 12:12:52 -- common/autotest_common.sh@10 -- # set +x 00:15:58.843 12:12:52 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:15:58.843 12:12:52 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:15:58.843 12:12:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:58.843 12:12:52 -- common/autotest_common.sh@10 -- # set +x 00:15:58.843 12:12:52 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:15:58.843 12:12:52 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:15:58.843 12:12:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:15:59.410 12:12:52 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:15:59.410 12:12:52 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:15:59.410 12:12:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:59.410 12:12:52 -- common/autotest_common.sh@10 -- # set +x 00:15:59.410 12:12:52 -- json_config/json_config.sh@45 -- # local ret=0 00:15:59.410 12:12:52 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:15:59.410 12:12:52 -- json_config/json_config.sh@46 -- # local enabled_types 00:15:59.410 12:12:52 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:15:59.410 12:12:52 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:15:59.410 12:12:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:15:59.410 12:12:52 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:15:59.410 12:12:52 -- json_config/json_config.sh@48 -- # local get_types 00:15:59.410 12:12:52 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:15:59.410 12:12:52 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:15:59.410 12:12:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:59.410 12:12:52 -- common/autotest_common.sh@10 -- # set +x 00:15:59.668 12:12:52 -- json_config/json_config.sh@55 -- # return 0 00:15:59.668 12:12:52 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:15:59.668 12:12:52 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:15:59.668 12:12:52 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:15:59.668 12:12:52 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 
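The tgt_rpc helper seen above is just rpc.py aimed at the target's RPC socket; a minimal sketch of the notification-type check it performs, assuming a target serving RPC on /var/tmp/spdk_tgt.sock:
# ask the target which notification types are enabled and flatten the JSON array
scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]'
# the test expects exactly bdev_register and bdev_unregister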
00:15:59.668 12:12:52 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:15:59.668 12:12:52 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:15:59.668 12:12:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:59.668 12:12:52 -- common/autotest_common.sh@10 -- # set +x 00:15:59.668 12:12:52 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:15:59.668 12:12:52 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:15:59.668 12:12:52 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:15:59.668 12:12:52 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:15:59.668 12:12:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:15:59.668 MallocForNvmf0 00:15:59.932 12:12:53 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:15:59.932 12:12:53 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:16:00.194 MallocForNvmf1 00:16:00.194 12:12:53 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:16:00.194 12:12:53 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:16:00.194 [2024-04-26 12:12:53.639143] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:00.194 12:12:53 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:00.194 12:12:53 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:00.453 12:12:53 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:16:00.453 12:12:53 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:16:00.711 12:12:54 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:16:00.711 12:12:54 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:16:00.970 12:12:54 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:16:00.970 12:12:54 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:16:01.229 [2024-04-26 12:12:54.583740] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:16:01.229 12:12:54 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:16:01.229 12:12:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:01.229 12:12:54 -- common/autotest_common.sh@10 -- # set +x 00:16:01.229 12:12:54 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:16:01.229 12:12:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:01.229 12:12:54 -- 
common/autotest_common.sh@10 -- # set +x 00:16:01.229 12:12:54 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:16:01.229 12:12:54 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:16:01.229 12:12:54 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:16:01.488 MallocBdevForConfigChangeCheck 00:16:01.488 12:12:54 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:16:01.488 12:12:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:01.488 12:12:54 -- common/autotest_common.sh@10 -- # set +x 00:16:01.747 12:12:54 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:16:01.747 12:12:54 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:16:02.005 INFO: shutting down applications... 00:16:02.005 12:12:55 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:16:02.005 12:12:55 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:16:02.005 12:12:55 -- json_config/json_config.sh@368 -- # json_config_clear target 00:16:02.005 12:12:55 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:16:02.005 12:12:55 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:16:02.263 Calling clear_iscsi_subsystem 00:16:02.263 Calling clear_nvmf_subsystem 00:16:02.263 Calling clear_nbd_subsystem 00:16:02.263 Calling clear_ublk_subsystem 00:16:02.263 Calling clear_vhost_blk_subsystem 00:16:02.263 Calling clear_vhost_scsi_subsystem 00:16:02.263 Calling clear_bdev_subsystem 00:16:02.263 12:12:55 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:16:02.263 12:12:55 -- json_config/json_config.sh@343 -- # count=100 00:16:02.263 12:12:55 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:16:02.263 12:12:55 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:16:02.263 12:12:55 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:16:02.263 12:12:55 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:16:02.845 12:12:56 -- json_config/json_config.sh@345 -- # break 00:16:02.845 12:12:56 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:16:02.845 12:12:56 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:16:02.845 12:12:56 -- json_config/common.sh@31 -- # local app=target 00:16:02.845 12:12:56 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:16:02.845 12:12:56 -- json_config/common.sh@35 -- # [[ -n 58884 ]] 00:16:02.845 12:12:56 -- json_config/common.sh@38 -- # kill -SIGINT 58884 00:16:02.845 12:12:56 -- json_config/common.sh@40 -- # (( i = 0 )) 00:16:02.845 12:12:56 -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:02.845 12:12:56 -- json_config/common.sh@41 -- # kill -0 58884 00:16:02.845 12:12:56 -- json_config/common.sh@45 -- # sleep 0.5 00:16:03.104 12:12:56 -- json_config/common.sh@40 -- # (( i++ )) 00:16:03.104 12:12:56 -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:03.104 12:12:56 -- json_config/common.sh@41 -- # kill -0 58884 00:16:03.104 SPDK target 
shutdown done 00:16:03.104 INFO: relaunching applications... 00:16:03.104 12:12:56 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:16:03.104 12:12:56 -- json_config/common.sh@43 -- # break 00:16:03.104 12:12:56 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:16:03.104 12:12:56 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:16:03.104 12:12:56 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:16:03.104 12:12:56 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:03.104 12:12:56 -- json_config/common.sh@9 -- # local app=target 00:16:03.104 12:12:56 -- json_config/common.sh@10 -- # shift 00:16:03.104 12:12:56 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:16:03.104 12:12:56 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:16:03.104 Waiting for target to run... 00:16:03.104 12:12:56 -- json_config/common.sh@15 -- # local app_extra_params= 00:16:03.104 12:12:56 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:16:03.104 12:12:56 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:16:03.104 12:12:56 -- json_config/common.sh@22 -- # app_pid["$app"]=59080 00:16:03.104 12:12:56 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:16:03.104 12:12:56 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:03.104 12:12:56 -- json_config/common.sh@25 -- # waitforlisten 59080 /var/tmp/spdk_tgt.sock 00:16:03.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:16:03.362 12:12:56 -- common/autotest_common.sh@817 -- # '[' -z 59080 ']' 00:16:03.362 12:12:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:16:03.362 12:12:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:03.362 12:12:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:16:03.362 12:12:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:03.362 12:12:56 -- common/autotest_common.sh@10 -- # set +x 00:16:03.362 [2024-04-26 12:12:56.627926] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:16:03.363 [2024-04-26 12:12:56.628015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59080 ] 00:16:03.621 [2024-04-26 12:12:57.070056] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.879 [2024-04-26 12:12:57.158534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.137 [2024-04-26 12:12:57.478597] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.137 [2024-04-26 12:12:57.510674] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:16:04.137 00:16:04.137 INFO: Checking if target configuration is the same... 
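For reference while reading the relaunch above: the configuration the json_config test drives into the target is the plain RPC sequence visible earlier in the trace. A minimal sketch of it, assuming a local spdk_tgt already listening on /var/tmp/spdk_tgt.sock and reusing the sizes and NQN from this run:

    #!/usr/bin/env bash
    # Replay of the NVMe-oF/TCP setup traced above; all values are taken from this run.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    $RPC bdev_malloc_create 8 512  --name MallocForNvmf0   # 8 MB bdev, 512-byte blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB bdev, 1024-byte blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0         # TCP transport init
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420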
00:16:04.137 12:12:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:04.137 12:12:57 -- common/autotest_common.sh@850 -- # return 0 00:16:04.137 12:12:57 -- json_config/common.sh@26 -- # echo '' 00:16:04.137 12:12:57 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:16:04.137 12:12:57 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:16:04.137 12:12:57 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:04.137 12:12:57 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:16:04.137 12:12:57 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:16:04.137 + '[' 2 -ne 2 ']' 00:16:04.137 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:16:04.137 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:16:04.396 + rootdir=/home/vagrant/spdk_repo/spdk 00:16:04.396 +++ basename /dev/fd/62 00:16:04.396 ++ mktemp /tmp/62.XXX 00:16:04.396 + tmp_file_1=/tmp/62.Tfa 00:16:04.396 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:04.396 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:16:04.396 + tmp_file_2=/tmp/spdk_tgt_config.json.pBh 00:16:04.396 + ret=0 00:16:04.396 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:16:04.654 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:16:04.654 + diff -u /tmp/62.Tfa /tmp/spdk_tgt_config.json.pBh 00:16:04.654 INFO: JSON config files are the same 00:16:04.654 + echo 'INFO: JSON config files are the same' 00:16:04.654 + rm /tmp/62.Tfa /tmp/spdk_tgt_config.json.pBh 00:16:04.654 + exit 0 00:16:04.654 INFO: changing configuration and checking if this can be detected... 00:16:04.654 12:12:58 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:16:04.654 12:12:58 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:16:04.654 12:12:58 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:16:04.654 12:12:58 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:16:04.913 12:12:58 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:04.913 12:12:58 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:16:04.913 12:12:58 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:16:04.913 + '[' 2 -ne 2 ']' 00:16:04.913 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:16:04.913 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:16:04.913 + rootdir=/home/vagrant/spdk_repo/spdk 00:16:04.913 +++ basename /dev/fd/62 00:16:04.913 ++ mktemp /tmp/62.XXX 00:16:04.913 + tmp_file_1=/tmp/62.pjQ 00:16:04.913 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:04.913 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:16:04.913 + tmp_file_2=/tmp/spdk_tgt_config.json.dyD 00:16:04.913 + ret=0 00:16:04.913 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:16:05.480 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:16:05.480 + diff -u /tmp/62.pjQ /tmp/spdk_tgt_config.json.dyD 00:16:05.480 + ret=1 00:16:05.480 + echo '=== Start of file: /tmp/62.pjQ ===' 00:16:05.480 + cat /tmp/62.pjQ 00:16:05.480 + echo '=== End of file: /tmp/62.pjQ ===' 00:16:05.480 + echo '' 00:16:05.480 + echo '=== Start of file: /tmp/spdk_tgt_config.json.dyD ===' 00:16:05.480 + cat /tmp/spdk_tgt_config.json.dyD 00:16:05.480 + echo '=== End of file: /tmp/spdk_tgt_config.json.dyD ===' 00:16:05.480 + echo '' 00:16:05.480 + rm /tmp/62.pjQ /tmp/spdk_tgt_config.json.dyD 00:16:05.480 + exit 1 00:16:05.480 INFO: configuration change detected. 00:16:05.480 12:12:58 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:16:05.480 12:12:58 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:16:05.480 12:12:58 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:16:05.480 12:12:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:05.480 12:12:58 -- common/autotest_common.sh@10 -- # set +x 00:16:05.480 12:12:58 -- json_config/json_config.sh@307 -- # local ret=0 00:16:05.480 12:12:58 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:16:05.480 12:12:58 -- json_config/json_config.sh@317 -- # [[ -n 59080 ]] 00:16:05.480 12:12:58 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:16:05.480 12:12:58 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:16:05.480 12:12:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:05.480 12:12:58 -- common/autotest_common.sh@10 -- # set +x 00:16:05.480 12:12:58 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:16:05.480 12:12:58 -- json_config/json_config.sh@193 -- # uname -s 00:16:05.480 12:12:58 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:16:05.480 12:12:58 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:16:05.480 12:12:58 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:16:05.480 12:12:58 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:16:05.480 12:12:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:05.480 12:12:58 -- common/autotest_common.sh@10 -- # set +x 00:16:05.480 12:12:58 -- json_config/json_config.sh@323 -- # killprocess 59080 00:16:05.480 12:12:58 -- common/autotest_common.sh@936 -- # '[' -z 59080 ']' 00:16:05.480 12:12:58 -- common/autotest_common.sh@940 -- # kill -0 59080 00:16:05.480 12:12:58 -- common/autotest_common.sh@941 -- # uname 00:16:05.480 12:12:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:05.480 12:12:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59080 00:16:05.480 killing process with pid 59080 00:16:05.480 12:12:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:05.480 12:12:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:05.480 12:12:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59080' 00:16:05.480 
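The change-detection step above is essentially a normalized diff of two save_config dumps. A rough equivalent of what json_diff.sh is doing, assuming config_filter.py reads the configuration on stdin as the test uses it (the temp-file names here are illustrative):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    FILTER="/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py"

    # Sort both the on-disk config and the live config so ordering differences are ignored.
    $FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
    $RPC save_config | $FILTER -method sort > /tmp/live.json

    # Exit status 0 means the configs match; 1 corresponds to the ret=1 "change detected" case above.
    diff -u /tmp/saved.json /tmp/live.json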
12:12:58 -- common/autotest_common.sh@955 -- # kill 59080 00:16:05.480 12:12:58 -- common/autotest_common.sh@960 -- # wait 59080 00:16:05.739 12:12:59 -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:05.739 12:12:59 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:16:05.739 12:12:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:05.739 12:12:59 -- common/autotest_common.sh@10 -- # set +x 00:16:06.062 INFO: Success 00:16:06.062 12:12:59 -- json_config/json_config.sh@328 -- # return 0 00:16:06.062 12:12:59 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:16:06.062 00:16:06.062 real 0m8.363s 00:16:06.062 user 0m11.968s 00:16:06.062 sys 0m1.791s 00:16:06.062 ************************************ 00:16:06.062 END TEST json_config 00:16:06.062 ************************************ 00:16:06.062 12:12:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:06.062 12:12:59 -- common/autotest_common.sh@10 -- # set +x 00:16:06.062 12:12:59 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:16:06.062 12:12:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:06.062 12:12:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:06.062 12:12:59 -- common/autotest_common.sh@10 -- # set +x 00:16:06.062 ************************************ 00:16:06.062 START TEST json_config_extra_key 00:16:06.062 ************************************ 00:16:06.062 12:12:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:16:06.062 12:12:59 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:06.062 12:12:59 -- nvmf/common.sh@7 -- # uname -s 00:16:06.062 12:12:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.062 12:12:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.062 12:12:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.062 12:12:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.062 12:12:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.062 12:12:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.062 12:12:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.062 12:12:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.062 12:12:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.062 12:12:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.062 12:12:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:16:06.062 12:12:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:16:06.062 12:12:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.062 12:12:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.062 12:12:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:06.062 12:12:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.062 12:12:59 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:06.062 12:12:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.062 12:12:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.062 12:12:59 -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.062 12:12:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.062 12:12:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.062 12:12:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.062 12:12:59 -- paths/export.sh@5 -- # export PATH 00:16:06.062 12:12:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.062 12:12:59 -- nvmf/common.sh@47 -- # : 0 00:16:06.062 12:12:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:06.062 12:12:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:06.062 12:12:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.062 12:12:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.062 12:12:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.062 12:12:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:06.062 12:12:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:06.062 12:12:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:06.062 12:12:59 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:16:06.062 12:12:59 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:16:06.062 12:12:59 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:16:06.062 12:12:59 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:16:06.062 12:12:59 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:16:06.062 12:12:59 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:16:06.062 INFO: launching applications... 
00:16:06.062 12:12:59 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:16:06.062 12:12:59 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:16:06.062 12:12:59 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:16:06.062 12:12:59 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:16:06.062 12:12:59 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:16:06.062 12:12:59 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:16:06.062 12:12:59 -- json_config/common.sh@9 -- # local app=target 00:16:06.062 12:12:59 -- json_config/common.sh@10 -- # shift 00:16:06.062 12:12:59 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:16:06.062 12:12:59 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:16:06.062 12:12:59 -- json_config/common.sh@15 -- # local app_extra_params= 00:16:06.062 12:12:59 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:16:06.062 12:12:59 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:16:06.062 12:12:59 -- json_config/common.sh@22 -- # app_pid["$app"]=59226 00:16:06.062 12:12:59 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:16:06.062 Waiting for target to run... 00:16:06.062 12:12:59 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:16:06.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:16:06.062 12:12:59 -- json_config/common.sh@25 -- # waitforlisten 59226 /var/tmp/spdk_tgt.sock 00:16:06.062 12:12:59 -- common/autotest_common.sh@817 -- # '[' -z 59226 ']' 00:16:06.063 12:12:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:16:06.063 12:12:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:06.063 12:12:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:16:06.063 12:12:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:06.063 12:12:59 -- common/autotest_common.sh@10 -- # set +x 00:16:06.063 [2024-04-26 12:12:59.502820] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:16:06.063 [2024-04-26 12:12:59.502935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59226 ] 00:16:06.630 [2024-04-26 12:12:59.963443] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.630 [2024-04-26 12:13:00.058192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.196 00:16:07.196 INFO: shutting down applications... 00:16:07.196 12:13:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:07.196 12:13:00 -- common/autotest_common.sh@850 -- # return 0 00:16:07.196 12:13:00 -- json_config/common.sh@26 -- # echo '' 00:16:07.196 12:13:00 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
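The extra_key variant boots the target from a JSON config and only proceeds once the RPC socket answers. A sketch of that start-up step, using the binary, flags, and config path from the trace; the polling loop is an illustrative stand-in for the waitforlisten helper, not its actual implementation:

    BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    SOCK=/var/tmp/spdk_tgt.sock
    CONF=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

    # One core, 1024 MB of memory, RPC on $SOCK, configuration loaded from JSON at boot.
    "$BIN" -m 0x1 -s 1024 -r "$SOCK" --json "$CONF" &
    tgt_pid=$!

    # Wait until the target responds on the socket (spdk_get_version is a standard SPDK RPC).
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" -t 1 spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done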
00:16:07.196 12:13:00 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:16:07.196 12:13:00 -- json_config/common.sh@31 -- # local app=target 00:16:07.196 12:13:00 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:16:07.196 12:13:00 -- json_config/common.sh@35 -- # [[ -n 59226 ]] 00:16:07.196 12:13:00 -- json_config/common.sh@38 -- # kill -SIGINT 59226 00:16:07.196 12:13:00 -- json_config/common.sh@40 -- # (( i = 0 )) 00:16:07.196 12:13:00 -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:07.196 12:13:00 -- json_config/common.sh@41 -- # kill -0 59226 00:16:07.196 12:13:00 -- json_config/common.sh@45 -- # sleep 0.5 00:16:07.762 12:13:01 -- json_config/common.sh@40 -- # (( i++ )) 00:16:07.762 12:13:01 -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:07.762 12:13:01 -- json_config/common.sh@41 -- # kill -0 59226 00:16:07.762 12:13:01 -- json_config/common.sh@45 -- # sleep 0.5 00:16:08.328 12:13:01 -- json_config/common.sh@40 -- # (( i++ )) 00:16:08.328 12:13:01 -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:08.328 12:13:01 -- json_config/common.sh@41 -- # kill -0 59226 00:16:08.328 12:13:01 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:16:08.328 12:13:01 -- json_config/common.sh@43 -- # break 00:16:08.328 12:13:01 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:16:08.328 SPDK target shutdown done 00:16:08.328 Success 00:16:08.328 12:13:01 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:16:08.328 12:13:01 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:16:08.328 ************************************ 00:16:08.328 END TEST json_config_extra_key 00:16:08.328 ************************************ 00:16:08.328 00:16:08.328 real 0m2.200s 00:16:08.328 user 0m1.690s 00:16:08.328 sys 0m0.459s 00:16:08.328 12:13:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:08.328 12:13:01 -- common/autotest_common.sh@10 -- # set +x 00:16:08.328 12:13:01 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:16:08.328 12:13:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:08.328 12:13:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:08.328 12:13:01 -- common/autotest_common.sh@10 -- # set +x 00:16:08.328 ************************************ 00:16:08.328 START TEST alias_rpc 00:16:08.328 ************************************ 00:16:08.328 12:13:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:16:08.328 * Looking for test storage... 00:16:08.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:16:08.328 12:13:01 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:08.328 12:13:01 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59302 00:16:08.328 12:13:01 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:08.328 12:13:01 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59302 00:16:08.328 12:13:01 -- common/autotest_common.sh@817 -- # '[' -z 59302 ']' 00:16:08.328 12:13:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.328 12:13:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:08.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
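The shutdown traced just above follows the pattern in json_config/common.sh: send SIGINT, then poll with kill -0 until the pid disappears. A minimal sketch of that pattern (the retry count and delay match the trace; the function name is illustrative):

    # Ask the target to exit and wait up to 30 x 0.5 s for the pid to go away.
    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"
        for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
            sleep 0.5
        done
        echo "process $pid did not exit in time" >&2
        return 1
    }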
00:16:08.328 12:13:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.328 12:13:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:08.328 12:13:01 -- common/autotest_common.sh@10 -- # set +x 00:16:08.607 [2024-04-26 12:13:01.819359] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:16:08.607 [2024-04-26 12:13:01.819509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59302 ] 00:16:08.607 [2024-04-26 12:13:01.962209] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.866 [2024-04-26 12:13:02.114770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.439 12:13:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:09.439 12:13:02 -- common/autotest_common.sh@850 -- # return 0 00:16:09.439 12:13:02 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:16:09.728 12:13:03 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59302 00:16:09.728 12:13:03 -- common/autotest_common.sh@936 -- # '[' -z 59302 ']' 00:16:09.728 12:13:03 -- common/autotest_common.sh@940 -- # kill -0 59302 00:16:09.728 12:13:03 -- common/autotest_common.sh@941 -- # uname 00:16:09.728 12:13:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:09.728 12:13:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59302 00:16:09.728 killing process with pid 59302 00:16:09.728 12:13:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:09.728 12:13:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:09.728 12:13:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59302' 00:16:09.728 12:13:03 -- common/autotest_common.sh@955 -- # kill 59302 00:16:09.728 12:13:03 -- common/autotest_common.sh@960 -- # wait 59302 00:16:10.293 ************************************ 00:16:10.293 END TEST alias_rpc 00:16:10.293 ************************************ 00:16:10.293 00:16:10.293 real 0m1.975s 00:16:10.293 user 0m2.249s 00:16:10.293 sys 0m0.477s 00:16:10.293 12:13:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:10.293 12:13:03 -- common/autotest_common.sh@10 -- # set +x 00:16:10.293 12:13:03 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:16:10.293 12:13:03 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:16:10.293 12:13:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:10.293 12:13:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:10.293 12:13:03 -- common/autotest_common.sh@10 -- # set +x 00:16:10.550 ************************************ 00:16:10.550 START TEST spdkcli_tcp 00:16:10.550 ************************************ 00:16:10.551 12:13:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:16:10.551 * Looking for test storage... 
00:16:10.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:10.551 12:13:03 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:10.551 12:13:03 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:10.551 12:13:03 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:10.551 12:13:03 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:16:10.551 12:13:03 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:16:10.551 12:13:03 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:10.551 12:13:03 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:16:10.551 12:13:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:10.551 12:13:03 -- common/autotest_common.sh@10 -- # set +x 00:16:10.551 12:13:03 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59385 00:16:10.551 12:13:03 -- spdkcli/tcp.sh@27 -- # waitforlisten 59385 00:16:10.551 12:13:03 -- common/autotest_common.sh@817 -- # '[' -z 59385 ']' 00:16:10.551 12:13:03 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:10.551 12:13:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.551 12:13:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:10.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.551 12:13:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.551 12:13:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:10.551 12:13:03 -- common/autotest_common.sh@10 -- # set +x 00:16:10.551 [2024-04-26 12:13:03.917377] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
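The spdkcli_tcp run below exercises the RPC server over TCP by bridging the target's UNIX socket with socat and pointing rpc.py at 127.0.0.1:9998. A sketch of that bridge, reusing the address, port, and rpc.py flags from this run:

    # Expose the UNIX-domain RPC socket on TCP port 9998 (single connection, as in the test).
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Query the target over TCP; -r sets connection retries, -t the per-call timeout.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"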
00:16:10.551 [2024-04-26 12:13:03.918130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59385 ] 00:16:10.809 [2024-04-26 12:13:04.058786] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:10.809 [2024-04-26 12:13:04.204002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.809 [2024-04-26 12:13:04.204018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.742 12:13:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:11.742 12:13:04 -- common/autotest_common.sh@850 -- # return 0 00:16:11.742 12:13:04 -- spdkcli/tcp.sh@31 -- # socat_pid=59405 00:16:11.742 12:13:04 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:16:11.742 12:13:04 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:16:11.742 [ 00:16:11.742 "bdev_malloc_delete", 00:16:11.742 "bdev_malloc_create", 00:16:11.742 "bdev_null_resize", 00:16:11.742 "bdev_null_delete", 00:16:11.742 "bdev_null_create", 00:16:11.742 "bdev_nvme_cuse_unregister", 00:16:11.742 "bdev_nvme_cuse_register", 00:16:11.742 "bdev_opal_new_user", 00:16:11.742 "bdev_opal_set_lock_state", 00:16:11.742 "bdev_opal_delete", 00:16:11.742 "bdev_opal_get_info", 00:16:11.742 "bdev_opal_create", 00:16:11.742 "bdev_nvme_opal_revert", 00:16:11.742 "bdev_nvme_opal_init", 00:16:11.742 "bdev_nvme_send_cmd", 00:16:11.742 "bdev_nvme_get_path_iostat", 00:16:11.742 "bdev_nvme_get_mdns_discovery_info", 00:16:11.742 "bdev_nvme_stop_mdns_discovery", 00:16:11.742 "bdev_nvme_start_mdns_discovery", 00:16:11.742 "bdev_nvme_set_multipath_policy", 00:16:11.742 "bdev_nvme_set_preferred_path", 00:16:11.742 "bdev_nvme_get_io_paths", 00:16:11.742 "bdev_nvme_remove_error_injection", 00:16:11.742 "bdev_nvme_add_error_injection", 00:16:11.742 "bdev_nvme_get_discovery_info", 00:16:11.742 "bdev_nvme_stop_discovery", 00:16:11.742 "bdev_nvme_start_discovery", 00:16:11.742 "bdev_nvme_get_controller_health_info", 00:16:11.742 "bdev_nvme_disable_controller", 00:16:11.742 "bdev_nvme_enable_controller", 00:16:11.742 "bdev_nvme_reset_controller", 00:16:11.742 "bdev_nvme_get_transport_statistics", 00:16:11.742 "bdev_nvme_apply_firmware", 00:16:11.742 "bdev_nvme_detach_controller", 00:16:11.742 "bdev_nvme_get_controllers", 00:16:11.742 "bdev_nvme_attach_controller", 00:16:11.742 "bdev_nvme_set_hotplug", 00:16:11.742 "bdev_nvme_set_options", 00:16:11.742 "bdev_passthru_delete", 00:16:11.742 "bdev_passthru_create", 00:16:11.742 "bdev_lvol_grow_lvstore", 00:16:11.742 "bdev_lvol_get_lvols", 00:16:11.742 "bdev_lvol_get_lvstores", 00:16:11.742 "bdev_lvol_delete", 00:16:11.742 "bdev_lvol_set_read_only", 00:16:11.743 "bdev_lvol_resize", 00:16:11.743 "bdev_lvol_decouple_parent", 00:16:11.743 "bdev_lvol_inflate", 00:16:11.743 "bdev_lvol_rename", 00:16:11.743 "bdev_lvol_clone_bdev", 00:16:11.743 "bdev_lvol_clone", 00:16:11.743 "bdev_lvol_snapshot", 00:16:11.743 "bdev_lvol_create", 00:16:11.743 "bdev_lvol_delete_lvstore", 00:16:11.743 "bdev_lvol_rename_lvstore", 00:16:11.743 "bdev_lvol_create_lvstore", 00:16:11.743 "bdev_raid_set_options", 00:16:11.743 "bdev_raid_remove_base_bdev", 00:16:11.743 "bdev_raid_add_base_bdev", 00:16:11.743 "bdev_raid_delete", 00:16:11.743 "bdev_raid_create", 00:16:11.743 "bdev_raid_get_bdevs", 00:16:11.743 "bdev_error_inject_error", 
00:16:11.743 "bdev_error_delete", 00:16:11.743 "bdev_error_create", 00:16:11.743 "bdev_split_delete", 00:16:11.743 "bdev_split_create", 00:16:11.743 "bdev_delay_delete", 00:16:11.743 "bdev_delay_create", 00:16:11.743 "bdev_delay_update_latency", 00:16:11.743 "bdev_zone_block_delete", 00:16:11.743 "bdev_zone_block_create", 00:16:11.743 "blobfs_create", 00:16:11.743 "blobfs_detect", 00:16:11.743 "blobfs_set_cache_size", 00:16:11.743 "bdev_aio_delete", 00:16:11.743 "bdev_aio_rescan", 00:16:11.743 "bdev_aio_create", 00:16:11.743 "bdev_ftl_set_property", 00:16:11.743 "bdev_ftl_get_properties", 00:16:11.743 "bdev_ftl_get_stats", 00:16:11.743 "bdev_ftl_unmap", 00:16:11.743 "bdev_ftl_unload", 00:16:11.743 "bdev_ftl_delete", 00:16:11.743 "bdev_ftl_load", 00:16:11.743 "bdev_ftl_create", 00:16:11.743 "bdev_virtio_attach_controller", 00:16:11.743 "bdev_virtio_scsi_get_devices", 00:16:11.743 "bdev_virtio_detach_controller", 00:16:11.743 "bdev_virtio_blk_set_hotplug", 00:16:11.743 "bdev_iscsi_delete", 00:16:11.743 "bdev_iscsi_create", 00:16:11.743 "bdev_iscsi_set_options", 00:16:11.743 "bdev_uring_delete", 00:16:11.743 "bdev_uring_rescan", 00:16:11.743 "bdev_uring_create", 00:16:11.743 "accel_error_inject_error", 00:16:11.743 "ioat_scan_accel_module", 00:16:11.743 "dsa_scan_accel_module", 00:16:11.743 "iaa_scan_accel_module", 00:16:11.743 "keyring_file_remove_key", 00:16:11.743 "keyring_file_add_key", 00:16:11.743 "iscsi_get_histogram", 00:16:11.743 "iscsi_enable_histogram", 00:16:11.743 "iscsi_set_options", 00:16:11.743 "iscsi_get_auth_groups", 00:16:11.743 "iscsi_auth_group_remove_secret", 00:16:11.743 "iscsi_auth_group_add_secret", 00:16:11.743 "iscsi_delete_auth_group", 00:16:11.743 "iscsi_create_auth_group", 00:16:11.743 "iscsi_set_discovery_auth", 00:16:11.743 "iscsi_get_options", 00:16:11.743 "iscsi_target_node_request_logout", 00:16:11.743 "iscsi_target_node_set_redirect", 00:16:11.743 "iscsi_target_node_set_auth", 00:16:11.743 "iscsi_target_node_add_lun", 00:16:11.743 "iscsi_get_stats", 00:16:11.743 "iscsi_get_connections", 00:16:11.743 "iscsi_portal_group_set_auth", 00:16:11.743 "iscsi_start_portal_group", 00:16:11.743 "iscsi_delete_portal_group", 00:16:11.743 "iscsi_create_portal_group", 00:16:11.743 "iscsi_get_portal_groups", 00:16:11.743 "iscsi_delete_target_node", 00:16:11.743 "iscsi_target_node_remove_pg_ig_maps", 00:16:11.743 "iscsi_target_node_add_pg_ig_maps", 00:16:11.743 "iscsi_create_target_node", 00:16:11.743 "iscsi_get_target_nodes", 00:16:11.743 "iscsi_delete_initiator_group", 00:16:11.743 "iscsi_initiator_group_remove_initiators", 00:16:11.743 "iscsi_initiator_group_add_initiators", 00:16:11.743 "iscsi_create_initiator_group", 00:16:11.743 "iscsi_get_initiator_groups", 00:16:11.743 "nvmf_set_crdt", 00:16:11.743 "nvmf_set_config", 00:16:11.743 "nvmf_set_max_subsystems", 00:16:11.743 "nvmf_subsystem_get_listeners", 00:16:11.743 "nvmf_subsystem_get_qpairs", 00:16:11.743 "nvmf_subsystem_get_controllers", 00:16:11.743 "nvmf_get_stats", 00:16:11.743 "nvmf_get_transports", 00:16:11.743 "nvmf_create_transport", 00:16:11.743 "nvmf_get_targets", 00:16:11.743 "nvmf_delete_target", 00:16:11.743 "nvmf_create_target", 00:16:11.743 "nvmf_subsystem_allow_any_host", 00:16:11.743 "nvmf_subsystem_remove_host", 00:16:11.743 "nvmf_subsystem_add_host", 00:16:11.743 "nvmf_ns_remove_host", 00:16:11.743 "nvmf_ns_add_host", 00:16:11.743 "nvmf_subsystem_remove_ns", 00:16:11.743 "nvmf_subsystem_add_ns", 00:16:11.743 "nvmf_subsystem_listener_set_ana_state", 00:16:11.743 "nvmf_discovery_get_referrals", 
00:16:11.743 "nvmf_discovery_remove_referral", 00:16:11.743 "nvmf_discovery_add_referral", 00:16:11.743 "nvmf_subsystem_remove_listener", 00:16:11.743 "nvmf_subsystem_add_listener", 00:16:11.743 "nvmf_delete_subsystem", 00:16:11.743 "nvmf_create_subsystem", 00:16:11.743 "nvmf_get_subsystems", 00:16:11.743 "env_dpdk_get_mem_stats", 00:16:11.743 "nbd_get_disks", 00:16:11.743 "nbd_stop_disk", 00:16:11.743 "nbd_start_disk", 00:16:11.743 "ublk_recover_disk", 00:16:11.743 "ublk_get_disks", 00:16:11.743 "ublk_stop_disk", 00:16:11.743 "ublk_start_disk", 00:16:11.743 "ublk_destroy_target", 00:16:11.743 "ublk_create_target", 00:16:11.743 "virtio_blk_create_transport", 00:16:11.743 "virtio_blk_get_transports", 00:16:11.743 "vhost_controller_set_coalescing", 00:16:11.743 "vhost_get_controllers", 00:16:11.743 "vhost_delete_controller", 00:16:11.743 "vhost_create_blk_controller", 00:16:11.743 "vhost_scsi_controller_remove_target", 00:16:11.743 "vhost_scsi_controller_add_target", 00:16:11.743 "vhost_start_scsi_controller", 00:16:11.743 "vhost_create_scsi_controller", 00:16:11.743 "thread_set_cpumask", 00:16:11.743 "framework_get_scheduler", 00:16:11.743 "framework_set_scheduler", 00:16:11.743 "framework_get_reactors", 00:16:11.743 "thread_get_io_channels", 00:16:11.743 "thread_get_pollers", 00:16:11.743 "thread_get_stats", 00:16:11.743 "framework_monitor_context_switch", 00:16:11.743 "spdk_kill_instance", 00:16:11.743 "log_enable_timestamps", 00:16:11.743 "log_get_flags", 00:16:11.743 "log_clear_flag", 00:16:11.743 "log_set_flag", 00:16:11.743 "log_get_level", 00:16:11.743 "log_set_level", 00:16:11.743 "log_get_print_level", 00:16:11.743 "log_set_print_level", 00:16:11.743 "framework_enable_cpumask_locks", 00:16:11.743 "framework_disable_cpumask_locks", 00:16:11.743 "framework_wait_init", 00:16:11.743 "framework_start_init", 00:16:11.743 "scsi_get_devices", 00:16:11.743 "bdev_get_histogram", 00:16:11.743 "bdev_enable_histogram", 00:16:11.743 "bdev_set_qos_limit", 00:16:11.743 "bdev_set_qd_sampling_period", 00:16:11.743 "bdev_get_bdevs", 00:16:11.743 "bdev_reset_iostat", 00:16:11.743 "bdev_get_iostat", 00:16:11.743 "bdev_examine", 00:16:11.743 "bdev_wait_for_examine", 00:16:11.743 "bdev_set_options", 00:16:11.743 "notify_get_notifications", 00:16:11.743 "notify_get_types", 00:16:11.743 "accel_get_stats", 00:16:11.743 "accel_set_options", 00:16:11.743 "accel_set_driver", 00:16:11.743 "accel_crypto_key_destroy", 00:16:11.743 "accel_crypto_keys_get", 00:16:11.743 "accel_crypto_key_create", 00:16:11.743 "accel_assign_opc", 00:16:11.743 "accel_get_module_info", 00:16:11.743 "accel_get_opc_assignments", 00:16:11.743 "vmd_rescan", 00:16:11.743 "vmd_remove_device", 00:16:11.743 "vmd_enable", 00:16:11.743 "sock_get_default_impl", 00:16:11.743 "sock_set_default_impl", 00:16:11.743 "sock_impl_set_options", 00:16:11.743 "sock_impl_get_options", 00:16:11.743 "iobuf_get_stats", 00:16:11.743 "iobuf_set_options", 00:16:11.743 "framework_get_pci_devices", 00:16:11.743 "framework_get_config", 00:16:11.743 "framework_get_subsystems", 00:16:11.743 "trace_get_info", 00:16:11.743 "trace_get_tpoint_group_mask", 00:16:11.743 "trace_disable_tpoint_group", 00:16:11.743 "trace_enable_tpoint_group", 00:16:11.743 "trace_clear_tpoint_mask", 00:16:11.743 "trace_set_tpoint_mask", 00:16:11.743 "keyring_get_keys", 00:16:11.743 "spdk_get_version", 00:16:11.743 "rpc_get_methods" 00:16:11.743 ] 00:16:11.743 12:13:05 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:16:11.743 12:13:05 -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:16:11.743 12:13:05 -- common/autotest_common.sh@10 -- # set +x 00:16:12.001 12:13:05 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:12.001 12:13:05 -- spdkcli/tcp.sh@38 -- # killprocess 59385 00:16:12.001 12:13:05 -- common/autotest_common.sh@936 -- # '[' -z 59385 ']' 00:16:12.001 12:13:05 -- common/autotest_common.sh@940 -- # kill -0 59385 00:16:12.001 12:13:05 -- common/autotest_common.sh@941 -- # uname 00:16:12.001 12:13:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:12.001 12:13:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59385 00:16:12.001 12:13:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:12.002 killing process with pid 59385 00:16:12.002 12:13:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:12.002 12:13:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59385' 00:16:12.002 12:13:05 -- common/autotest_common.sh@955 -- # kill 59385 00:16:12.002 12:13:05 -- common/autotest_common.sh@960 -- # wait 59385 00:16:12.568 00:16:12.568 real 0m1.995s 00:16:12.568 user 0m3.653s 00:16:12.568 sys 0m0.515s 00:16:12.568 ************************************ 00:16:12.568 END TEST spdkcli_tcp 00:16:12.568 ************************************ 00:16:12.568 12:13:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:12.568 12:13:05 -- common/autotest_common.sh@10 -- # set +x 00:16:12.568 12:13:05 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:16:12.568 12:13:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:12.568 12:13:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:12.568 12:13:05 -- common/autotest_common.sh@10 -- # set +x 00:16:12.568 ************************************ 00:16:12.568 START TEST dpdk_mem_utility 00:16:12.568 ************************************ 00:16:12.568 12:13:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:16:12.568 * Looking for test storage... 00:16:12.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:16:12.568 12:13:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:16:12.568 12:13:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59485 00:16:12.568 12:13:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:12.568 12:13:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59485 00:16:12.568 12:13:05 -- common/autotest_common.sh@817 -- # '[' -z 59485 ']' 00:16:12.568 12:13:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.568 12:13:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:12.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.568 12:13:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.569 12:13:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:12.569 12:13:05 -- common/autotest_common.sh@10 -- # set +x 00:16:12.826 [2024-04-26 12:13:06.040467] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
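The memory report that follows is produced in two steps: the env_dpdk_get_mem_stats RPC writes a dump file (the trace shows /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py renders it. A sketch of that flow against the socket used by this test:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    MEMINFO=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    # Dump DPDK memory statistics from the running target, then summarize them.
    $RPC env_dpdk_get_mem_stats
    "$MEMINFO"          # heap / mempool / memzone summary
    "$MEMINFO" -m 0     # per-element detail for heap 0, as printed below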
00:16:12.826 [2024-04-26 12:13:06.041520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59485 ] 00:16:12.826 [2024-04-26 12:13:06.181617] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.128 [2024-04-26 12:13:06.300879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.695 12:13:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:13.695 12:13:07 -- common/autotest_common.sh@850 -- # return 0 00:16:13.695 12:13:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:16:13.695 12:13:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:16:13.695 12:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.695 12:13:07 -- common/autotest_common.sh@10 -- # set +x 00:16:13.695 { 00:16:13.695 "filename": "/tmp/spdk_mem_dump.txt" 00:16:13.695 } 00:16:13.695 12:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.695 12:13:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:16:13.695 DPDK memory size 814.000000 MiB in 1 heap(s) 00:16:13.695 1 heaps totaling size 814.000000 MiB 00:16:13.695 size: 814.000000 MiB heap id: 0 00:16:13.695 end heaps---------- 00:16:13.695 8 mempools totaling size 598.116089 MiB 00:16:13.695 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:16:13.695 size: 158.602051 MiB name: PDU_data_out_Pool 00:16:13.695 size: 84.521057 MiB name: bdev_io_59485 00:16:13.695 size: 51.011292 MiB name: evtpool_59485 00:16:13.695 size: 50.003479 MiB name: msgpool_59485 00:16:13.695 size: 21.763794 MiB name: PDU_Pool 00:16:13.695 size: 19.513306 MiB name: SCSI_TASK_Pool 00:16:13.695 size: 0.026123 MiB name: Session_Pool 00:16:13.695 end mempools------- 00:16:13.695 6 memzones totaling size 4.142822 MiB 00:16:13.695 size: 1.000366 MiB name: RG_ring_0_59485 00:16:13.695 size: 1.000366 MiB name: RG_ring_1_59485 00:16:13.695 size: 1.000366 MiB name: RG_ring_4_59485 00:16:13.695 size: 1.000366 MiB name: RG_ring_5_59485 00:16:13.695 size: 0.125366 MiB name: RG_ring_2_59485 00:16:13.695 size: 0.015991 MiB name: RG_ring_3_59485 00:16:13.695 end memzones------- 00:16:13.695 12:13:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:16:13.955 heap id: 0 total size: 814.000000 MiB number of busy elements: 304 number of free elements: 15 00:16:13.955 list of free elements. 
size: 12.471191 MiB 00:16:13.955 element at address: 0x200000400000 with size: 1.999512 MiB 00:16:13.955 element at address: 0x200018e00000 with size: 0.999878 MiB 00:16:13.955 element at address: 0x200019000000 with size: 0.999878 MiB 00:16:13.955 element at address: 0x200003e00000 with size: 0.996277 MiB 00:16:13.955 element at address: 0x200031c00000 with size: 0.994446 MiB 00:16:13.955 element at address: 0x200013800000 with size: 0.978699 MiB 00:16:13.955 element at address: 0x200007000000 with size: 0.959839 MiB 00:16:13.955 element at address: 0x200019200000 with size: 0.936584 MiB 00:16:13.955 element at address: 0x200000200000 with size: 0.833191 MiB 00:16:13.955 element at address: 0x20001aa00000 with size: 0.568604 MiB 00:16:13.955 element at address: 0x20000b200000 with size: 0.488892 MiB 00:16:13.955 element at address: 0x200000800000 with size: 0.486145 MiB 00:16:13.955 element at address: 0x200019400000 with size: 0.485657 MiB 00:16:13.955 element at address: 0x200027e00000 with size: 0.395752 MiB 00:16:13.955 element at address: 0x200003a00000 with size: 0.347839 MiB 00:16:13.955 list of standard malloc elements. size: 199.266235 MiB 00:16:13.955 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:16:13.955 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:16:13.955 element at address: 0x200018efff80 with size: 1.000122 MiB 00:16:13.955 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:16:13.956 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:16:13.956 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:16:13.956 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:16:13.956 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:16:13.956 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:16:13.956 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d6600 with size: 0.000183 MiB 
00:16:13.956 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000087c740 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000087c800 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000087c980 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a59180 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a59240 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a59300 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a59480 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a59540 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a59600 with size: 0.000183 MiB 00:16:13.956 element at 
address: 0x200003a596c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a59780 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a59840 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a59900 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003adb300 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003adb500 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003affa80 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003affb40 with size: 0.000183 MiB 00:16:13.956 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000b27d7c0 
with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:16:13.956 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:16:13.956 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa93640 with size: 0.000183 MiB 
00:16:13.957 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:16:13.957 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e65500 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:16:13.957 element at 
address: 0x200027e6c840 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6ed00 
with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:16:13.957 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:16:13.957 list of memzone associated elements. 
size: 602.262573 MiB 00:16:13.957 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:16:13.957 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:16:13.958 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:16:13.958 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:16:13.958 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:16:13.958 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59485_0 00:16:13.958 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:16:13.958 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59485_0 00:16:13.958 element at address: 0x200003fff380 with size: 48.003052 MiB 00:16:13.958 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59485_0 00:16:13.958 element at address: 0x2000195be940 with size: 20.255554 MiB 00:16:13.958 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:16:13.958 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:16:13.958 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:16:13.958 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:16:13.958 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59485 00:16:13.958 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:16:13.958 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59485 00:16:13.958 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:16:13.958 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59485 00:16:13.958 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:16:13.958 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:16:13.958 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:16:13.958 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:16:13.958 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:16:13.958 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:16:13.958 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:16:13.958 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:16:13.958 element at address: 0x200003eff180 with size: 1.000488 MiB 00:16:13.958 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59485 00:16:13.958 element at address: 0x200003affc00 with size: 1.000488 MiB 00:16:13.958 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59485 00:16:13.958 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:16:13.958 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59485 00:16:13.958 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:16:13.958 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59485 00:16:13.958 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:16:13.958 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59485 00:16:13.958 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:16:13.958 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:16:13.958 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:16:13.958 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:16:13.958 element at address: 0x20001947c540 with size: 0.250488 MiB 00:16:13.958 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:16:13.958 element at address: 0x200003adf880 with size: 0.125488 MiB 00:16:13.958 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_59485 00:16:13.958 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:16:13.958 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:16:13.958 element at address: 0x200027e65680 with size: 0.023743 MiB 00:16:13.958 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:16:13.958 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:16:13.958 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59485 00:16:13.958 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:16:13.958 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:16:13.958 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:16:13.958 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59485 00:16:13.958 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:16:13.958 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59485 00:16:13.958 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:16:13.958 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:16:13.958 12:13:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:16:13.958 12:13:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59485 00:16:13.958 12:13:07 -- common/autotest_common.sh@936 -- # '[' -z 59485 ']' 00:16:13.958 12:13:07 -- common/autotest_common.sh@940 -- # kill -0 59485 00:16:13.958 12:13:07 -- common/autotest_common.sh@941 -- # uname 00:16:13.958 12:13:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:13.958 12:13:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59485 00:16:13.958 12:13:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:13.958 12:13:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:13.958 killing process with pid 59485 00:16:13.958 12:13:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59485' 00:16:13.958 12:13:07 -- common/autotest_common.sh@955 -- # kill 59485 00:16:13.958 12:13:07 -- common/autotest_common.sh@960 -- # wait 59485 00:16:14.525 00:16:14.525 real 0m1.809s 00:16:14.525 user 0m1.957s 00:16:14.525 sys 0m0.472s 00:16:14.525 12:13:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:14.525 ************************************ 00:16:14.525 END TEST dpdk_mem_utility 00:16:14.525 ************************************ 00:16:14.525 12:13:07 -- common/autotest_common.sh@10 -- # set +x 00:16:14.525 12:13:07 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:16:14.525 12:13:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:14.525 12:13:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:14.525 12:13:07 -- common/autotest_common.sh@10 -- # set +x 00:16:14.525 ************************************ 00:16:14.525 START TEST event 00:16:14.525 ************************************ 00:16:14.525 12:13:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:16:14.525 * Looking for test storage... 
00:16:14.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:16:14.525 12:13:07 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:14.525 12:13:07 -- bdev/nbd_common.sh@6 -- # set -e 00:16:14.525 12:13:07 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:16:14.525 12:13:07 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:16:14.525 12:13:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:14.525 12:13:07 -- common/autotest_common.sh@10 -- # set +x 00:16:14.525 ************************************ 00:16:14.525 START TEST event_perf 00:16:14.525 ************************************ 00:16:14.525 12:13:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:16:14.525 Running I/O for 1 seconds...[2024-04-26 12:13:07.992686] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:16:14.525 [2024-04-26 12:13:07.992784] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59566 ] 00:16:14.783 [2024-04-26 12:13:08.130625] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:15.041 [2024-04-26 12:13:08.255665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.041 [2024-04-26 12:13:08.255812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.041 Running I/O for 1 seconds...[2024-04-26 12:13:08.255931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.042 [2024-04-26 12:13:08.255932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:15.977 00:16:15.977 lcore 0: 189678 00:16:15.977 lcore 1: 189679 00:16:15.977 lcore 2: 189677 00:16:15.977 lcore 3: 189679 00:16:15.977 done. 00:16:15.977 00:16:15.977 real 0m1.404s 00:16:15.977 user 0m4.218s 00:16:15.977 sys 0m0.063s 00:16:15.977 12:13:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:15.977 12:13:09 -- common/autotest_common.sh@10 -- # set +x 00:16:15.977 ************************************ 00:16:15.977 END TEST event_perf 00:16:15.977 ************************************ 00:16:15.977 12:13:09 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:16:15.977 12:13:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:15.977 12:13:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:15.977 12:13:09 -- common/autotest_common.sh@10 -- # set +x 00:16:16.235 ************************************ 00:16:16.235 START TEST event_reactor 00:16:16.235 ************************************ 00:16:16.235 12:13:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:16:16.235 [2024-04-26 12:13:09.511302] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:16:16.235 [2024-04-26 12:13:09.511428] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59608 ] 00:16:16.235 [2024-04-26 12:13:09.647058] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.493 [2024-04-26 12:13:09.767701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.425 test_start 00:16:17.425 oneshot 00:16:17.425 tick 100 00:16:17.425 tick 100 00:16:17.425 tick 250 00:16:17.425 tick 100 00:16:17.425 tick 100 00:16:17.425 tick 100 00:16:17.425 tick 250 00:16:17.425 tick 500 00:16:17.425 tick 100 00:16:17.425 tick 100 00:16:17.425 tick 250 00:16:17.425 tick 100 00:16:17.425 tick 100 00:16:17.425 test_end 00:16:17.425 00:16:17.425 real 0m1.376s 00:16:17.425 user 0m1.210s 00:16:17.425 sys 0m0.060s 00:16:17.425 12:13:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:17.425 12:13:10 -- common/autotest_common.sh@10 -- # set +x 00:16:17.425 ************************************ 00:16:17.425 END TEST event_reactor 00:16:17.425 ************************************ 00:16:17.683 12:13:10 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:17.684 12:13:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:17.684 12:13:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:17.684 12:13:10 -- common/autotest_common.sh@10 -- # set +x 00:16:17.684 ************************************ 00:16:17.684 START TEST event_reactor_perf 00:16:17.684 ************************************ 00:16:17.684 12:13:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:17.684 [2024-04-26 12:13:11.000886] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:16:17.684 [2024-04-26 12:13:11.000980] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59649 ] 00:16:17.684 [2024-04-26 12:13:11.132990] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.941 [2024-04-26 12:13:11.249276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.317 test_start 00:16:19.317 test_end 00:16:19.317 Performance: 370029 events per second 00:16:19.317 00:16:19.317 real 0m1.377s 00:16:19.317 user 0m1.218s 00:16:19.317 sys 0m0.052s 00:16:19.317 12:13:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:19.317 12:13:12 -- common/autotest_common.sh@10 -- # set +x 00:16:19.317 ************************************ 00:16:19.317 END TEST event_reactor_perf 00:16:19.317 ************************************ 00:16:19.317 12:13:12 -- event/event.sh@49 -- # uname -s 00:16:19.317 12:13:12 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:16:19.317 12:13:12 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:16:19.317 12:13:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:19.317 12:13:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:19.317 12:13:12 -- common/autotest_common.sh@10 -- # set +x 00:16:19.317 ************************************ 00:16:19.317 START TEST event_scheduler 00:16:19.317 ************************************ 00:16:19.317 12:13:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:16:19.317 * Looking for test storage... 00:16:19.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:16:19.317 12:13:12 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:16:19.317 12:13:12 -- scheduler/scheduler.sh@35 -- # scheduler_pid=59716 00:16:19.317 12:13:12 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:16:19.317 12:13:12 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:16:19.317 12:13:12 -- scheduler/scheduler.sh@37 -- # waitforlisten 59716 00:16:19.317 12:13:12 -- common/autotest_common.sh@817 -- # '[' -z 59716 ']' 00:16:19.317 12:13:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.317 12:13:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:19.317 12:13:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.317 12:13:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:19.317 12:13:12 -- common/autotest_common.sh@10 -- # set +x 00:16:19.317 [2024-04-26 12:13:12.616857] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:16:19.317 [2024-04-26 12:13:12.616981] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59716 ] 00:16:19.317 [2024-04-26 12:13:12.755524] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:19.574 [2024-04-26 12:13:12.875615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.574 [2024-04-26 12:13:12.875794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.574 [2024-04-26 12:13:12.875902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.574 [2024-04-26 12:13:12.876034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.139 12:13:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:20.139 12:13:13 -- common/autotest_common.sh@850 -- # return 0 00:16:20.139 12:13:13 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:16:20.139 12:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.139 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:16:20.139 POWER: Env isn't set yet! 00:16:20.139 POWER: Attempting to initialise ACPI cpufreq power management... 00:16:20.139 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:20.139 POWER: Cannot set governor of lcore 0 to userspace 00:16:20.139 POWER: Attempting to initialise PSTAT power management... 00:16:20.139 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:20.139 POWER: Cannot set governor of lcore 0 to performance 00:16:20.139 POWER: Attempting to initialise AMD PSTATE power management... 00:16:20.139 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:20.139 POWER: Cannot set governor of lcore 0 to userspace 00:16:20.139 POWER: Attempting to initialise CPPC power management... 00:16:20.139 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:20.139 POWER: Cannot set governor of lcore 0 to userspace 00:16:20.139 POWER: Attempting to initialise VM power management... 00:16:20.139 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:16:20.139 POWER: Unable to set Power Management Environment for lcore 0 00:16:20.139 [2024-04-26 12:13:13.601620] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:16:20.139 [2024-04-26 12:13:13.601635] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:16:20.139 [2024-04-26 12:13:13.601644] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:16:20.139 12:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.139 12:13:13 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:16:20.139 12:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.139 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:16:20.395 [2024-04-26 12:13:13.707106] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:16:20.395 12:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.395 12:13:13 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:16:20.395 12:13:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:20.395 12:13:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:20.396 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:16:20.396 ************************************ 00:16:20.396 START TEST scheduler_create_thread 00:16:20.396 ************************************ 00:16:20.396 12:13:13 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:16:20.396 12:13:13 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:16:20.396 12:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.396 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:16:20.396 2 00:16:20.396 12:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.396 12:13:13 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:16:20.396 12:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.396 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:16:20.396 3 00:16:20.396 12:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.396 12:13:13 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:16:20.396 12:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.396 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:16:20.396 4 00:16:20.396 12:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.396 12:13:13 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:16:20.396 12:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.396 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:16:20.396 5 00:16:20.396 12:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.396 12:13:13 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:16:20.396 12:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.396 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:16:20.396 6 00:16:20.396 12:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.396 12:13:13 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:16:20.396 12:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.396 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:16:20.396 7 00:16:20.396 12:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.396 12:13:13 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:16:20.396 12:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.396 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:16:20.396 8 00:16:20.396 12:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.396 12:13:13 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:16:20.396 12:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.396 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:16:20.396 9 00:16:20.396 
12:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.396 12:13:13 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:16:20.396 12:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.396 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:16:20.396 10 00:16:20.396 12:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.396 12:13:13 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:16:20.396 12:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.396 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:16:20.653 12:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.653 12:13:13 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:16:20.653 12:13:13 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:16:20.653 12:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.653 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:16:20.653 12:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.653 12:13:13 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:16:20.653 12:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.653 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:16:20.653 12:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.653 12:13:13 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:16:20.653 12:13:13 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:16:20.653 12:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.653 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:16:21.615 12:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.615 00:16:21.615 real 0m1.171s 00:16:21.615 user 0m0.017s 00:16:21.615 sys 0m0.003s 00:16:21.615 12:13:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:21.615 12:13:14 -- common/autotest_common.sh@10 -- # set +x 00:16:21.615 ************************************ 00:16:21.615 END TEST scheduler_create_thread 00:16:21.615 ************************************ 00:16:21.615 12:13:14 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:21.615 12:13:14 -- scheduler/scheduler.sh@46 -- # killprocess 59716 00:16:21.615 12:13:14 -- common/autotest_common.sh@936 -- # '[' -z 59716 ']' 00:16:21.615 12:13:14 -- common/autotest_common.sh@940 -- # kill -0 59716 00:16:21.615 12:13:14 -- common/autotest_common.sh@941 -- # uname 00:16:21.615 12:13:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:21.615 12:13:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59716 00:16:21.615 12:13:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:21.615 12:13:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:21.615 killing process with pid 59716 00:16:21.615 12:13:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59716' 00:16:21.615 12:13:15 -- common/autotest_common.sh@955 -- # kill 59716 00:16:21.615 12:13:15 -- common/autotest_common.sh@960 -- # wait 59716 00:16:22.180 [2024-04-26 12:13:15.429944] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:16:22.437 00:16:22.437 real 0m3.219s 00:16:22.437 user 0m5.716s 00:16:22.437 sys 0m0.429s 00:16:22.437 12:13:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:22.437 12:13:15 -- common/autotest_common.sh@10 -- # set +x 00:16:22.437 ************************************ 00:16:22.437 END TEST event_scheduler 00:16:22.437 ************************************ 00:16:22.437 12:13:15 -- event/event.sh@51 -- # modprobe -n nbd 00:16:22.437 12:13:15 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:16:22.437 12:13:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:22.437 12:13:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:22.437 12:13:15 -- common/autotest_common.sh@10 -- # set +x 00:16:22.437 ************************************ 00:16:22.437 START TEST app_repeat 00:16:22.437 ************************************ 00:16:22.437 12:13:15 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:16:22.437 12:13:15 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:22.437 12:13:15 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:22.437 12:13:15 -- event/event.sh@13 -- # local nbd_list 00:16:22.437 12:13:15 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:22.437 12:13:15 -- event/event.sh@14 -- # local bdev_list 00:16:22.437 12:13:15 -- event/event.sh@15 -- # local repeat_times=4 00:16:22.437 12:13:15 -- event/event.sh@17 -- # modprobe nbd 00:16:22.437 12:13:15 -- event/event.sh@19 -- # repeat_pid=59813 00:16:22.437 12:13:15 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:16:22.437 Process app_repeat pid: 59813 00:16:22.437 12:13:15 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59813' 00:16:22.437 12:13:15 -- event/event.sh@23 -- # for i in {0..2} 00:16:22.437 spdk_app_start Round 0 00:16:22.437 12:13:15 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:16:22.437 12:13:15 -- event/event.sh@25 -- # waitforlisten 59813 /var/tmp/spdk-nbd.sock 00:16:22.437 12:13:15 -- common/autotest_common.sh@817 -- # '[' -z 59813 ']' 00:16:22.437 12:13:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:22.437 12:13:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:22.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:22.437 12:13:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:22.437 12:13:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:22.437 12:13:15 -- common/autotest_common.sh@10 -- # set +x 00:16:22.437 12:13:15 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:16:22.437 [2024-04-26 12:13:15.862739] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:16:22.437 [2024-04-26 12:13:15.862865] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59813 ] 00:16:22.694 [2024-04-26 12:13:16.004845] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:22.694 [2024-04-26 12:13:16.124012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.694 [2024-04-26 12:13:16.124027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.627 12:13:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:23.627 12:13:16 -- common/autotest_common.sh@850 -- # return 0 00:16:23.627 12:13:16 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:23.885 Malloc0 00:16:23.885 12:13:17 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:24.143 Malloc1 00:16:24.143 12:13:17 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:24.143 12:13:17 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:24.143 12:13:17 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:24.143 12:13:17 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:24.143 12:13:17 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:24.143 12:13:17 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:24.143 12:13:17 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:24.143 12:13:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:24.143 12:13:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:24.143 12:13:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:24.143 12:13:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:24.143 12:13:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:24.143 12:13:17 -- bdev/nbd_common.sh@12 -- # local i 00:16:24.143 12:13:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:24.143 12:13:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:24.143 12:13:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:24.401 /dev/nbd0 00:16:24.401 12:13:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:24.401 12:13:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:24.401 12:13:17 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:16:24.401 12:13:17 -- common/autotest_common.sh@855 -- # local i 00:16:24.401 12:13:17 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:24.401 12:13:17 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:24.401 12:13:17 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:16:24.401 12:13:17 -- common/autotest_common.sh@859 -- # break 00:16:24.401 12:13:17 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:24.401 12:13:17 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:24.401 12:13:17 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:24.401 1+0 records in 00:16:24.401 1+0 records out 00:16:24.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262087 s, 15.6 MB/s 00:16:24.401 12:13:17 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:24.401 12:13:17 -- common/autotest_common.sh@872 -- # size=4096 00:16:24.401 12:13:17 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:24.401 12:13:17 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:24.401 12:13:17 -- common/autotest_common.sh@875 -- # return 0 00:16:24.401 12:13:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:24.401 12:13:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:24.401 12:13:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:24.658 /dev/nbd1 00:16:24.658 12:13:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:24.658 12:13:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:24.658 12:13:18 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:16:24.658 12:13:18 -- common/autotest_common.sh@855 -- # local i 00:16:24.658 12:13:18 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:24.658 12:13:18 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:24.658 12:13:18 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:16:24.658 12:13:18 -- common/autotest_common.sh@859 -- # break 00:16:24.658 12:13:18 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:24.658 12:13:18 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:24.658 12:13:18 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:24.658 1+0 records in 00:16:24.658 1+0 records out 00:16:24.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357231 s, 11.5 MB/s 00:16:24.659 12:13:18 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:24.659 12:13:18 -- common/autotest_common.sh@872 -- # size=4096 00:16:24.659 12:13:18 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:24.659 12:13:18 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:24.659 12:13:18 -- common/autotest_common.sh@875 -- # return 0 00:16:24.659 12:13:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:24.659 12:13:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:24.659 12:13:18 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:24.659 12:13:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:24.659 12:13:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:24.916 12:13:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:24.916 { 00:16:24.916 "nbd_device": "/dev/nbd0", 00:16:24.916 "bdev_name": "Malloc0" 00:16:24.916 }, 00:16:24.916 { 00:16:24.916 "nbd_device": "/dev/nbd1", 00:16:24.916 "bdev_name": "Malloc1" 00:16:24.916 } 00:16:24.916 ]' 00:16:24.916 12:13:18 -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:24.916 { 00:16:24.916 "nbd_device": "/dev/nbd0", 00:16:24.916 "bdev_name": "Malloc0" 00:16:24.916 }, 00:16:24.916 { 00:16:24.916 "nbd_device": "/dev/nbd1", 00:16:24.916 "bdev_name": "Malloc1" 00:16:24.916 } 00:16:24.916 ]' 00:16:24.916 12:13:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:24.916 12:13:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:24.916 /dev/nbd1' 00:16:24.917 12:13:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:24.917 12:13:18 -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:16:24.917 /dev/nbd1' 00:16:24.917 12:13:18 -- bdev/nbd_common.sh@65 -- # count=2 00:16:24.917 12:13:18 -- bdev/nbd_common.sh@66 -- # echo 2 00:16:24.917 12:13:18 -- bdev/nbd_common.sh@95 -- # count=2 00:16:24.917 12:13:18 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:24.917 12:13:18 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:24.917 12:13:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:24.917 12:13:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:24.917 12:13:18 -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:24.917 12:13:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:24.917 12:13:18 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:24.917 12:13:18 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:24.917 256+0 records in 00:16:24.917 256+0 records out 00:16:24.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00734085 s, 143 MB/s 00:16:24.917 12:13:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:24.917 12:13:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:24.917 256+0 records in 00:16:24.917 256+0 records out 00:16:24.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243334 s, 43.1 MB/s 00:16:24.917 12:13:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:24.917 12:13:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:25.175 256+0 records in 00:16:25.175 256+0 records out 00:16:25.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244892 s, 42.8 MB/s 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@51 -- # local i 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:25.175 12:13:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:25.433 12:13:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:25.433 12:13:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:25.433 12:13:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:25.433 12:13:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:25.433 12:13:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:25.433 12:13:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:25.433 12:13:18 -- bdev/nbd_common.sh@41 -- # break 00:16:25.433 12:13:18 -- bdev/nbd_common.sh@45 -- # return 0 00:16:25.433 12:13:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:25.433 12:13:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:25.695 12:13:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:25.695 12:13:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:25.695 12:13:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:25.695 12:13:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:25.695 12:13:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:25.695 12:13:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:25.695 12:13:18 -- bdev/nbd_common.sh@41 -- # break 00:16:25.695 12:13:18 -- bdev/nbd_common.sh@45 -- # return 0 00:16:25.695 12:13:18 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:25.695 12:13:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:25.695 12:13:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:25.953 12:13:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:25.953 12:13:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:25.953 12:13:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:25.953 12:13:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:25.953 12:13:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:25.953 12:13:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:25.953 12:13:19 -- bdev/nbd_common.sh@65 -- # true 00:16:25.953 12:13:19 -- bdev/nbd_common.sh@65 -- # count=0 00:16:25.953 12:13:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:25.953 12:13:19 -- bdev/nbd_common.sh@104 -- # count=0 00:16:25.953 12:13:19 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:25.953 12:13:19 -- bdev/nbd_common.sh@109 -- # return 0 00:16:25.953 12:13:19 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:26.211 12:13:19 -- event/event.sh@35 -- # sleep 3 00:16:26.468 [2024-04-26 12:13:19.886375] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:26.727 [2024-04-26 12:13:20.004695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.727 [2024-04-26 12:13:20.004706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.727 [2024-04-26 12:13:20.068496] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:26.727 [2024-04-26 12:13:20.068584] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:29.399 spdk_app_start Round 1 00:16:29.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:16:29.399 12:13:22 -- event/event.sh@23 -- # for i in {0..2} 00:16:29.399 12:13:22 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:16:29.399 12:13:22 -- event/event.sh@25 -- # waitforlisten 59813 /var/tmp/spdk-nbd.sock 00:16:29.399 12:13:22 -- common/autotest_common.sh@817 -- # '[' -z 59813 ']' 00:16:29.399 12:13:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:29.399 12:13:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:29.399 12:13:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:29.399 12:13:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:29.399 12:13:22 -- common/autotest_common.sh@10 -- # set +x 00:16:29.658 12:13:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:29.658 12:13:22 -- common/autotest_common.sh@850 -- # return 0 00:16:29.658 12:13:22 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:29.915 Malloc0 00:16:29.915 12:13:23 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:30.179 Malloc1 00:16:30.179 12:13:23 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:30.179 12:13:23 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:30.179 12:13:23 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:30.179 12:13:23 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:30.179 12:13:23 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:30.179 12:13:23 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:30.179 12:13:23 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:30.179 12:13:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:30.179 12:13:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:30.179 12:13:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:30.179 12:13:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:30.179 12:13:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:30.179 12:13:23 -- bdev/nbd_common.sh@12 -- # local i 00:16:30.179 12:13:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:30.179 12:13:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:30.179 12:13:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:30.437 /dev/nbd0 00:16:30.437 12:13:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:30.437 12:13:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:30.437 12:13:23 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:16:30.437 12:13:23 -- common/autotest_common.sh@855 -- # local i 00:16:30.437 12:13:23 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:30.437 12:13:23 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:30.437 12:13:23 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:16:30.437 12:13:23 -- common/autotest_common.sh@859 -- # break 00:16:30.437 12:13:23 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:30.437 12:13:23 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:30.437 12:13:23 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:16:30.437 1+0 records in 00:16:30.437 1+0 records out 00:16:30.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370252 s, 11.1 MB/s 00:16:30.437 12:13:23 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:30.437 12:13:23 -- common/autotest_common.sh@872 -- # size=4096 00:16:30.438 12:13:23 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:30.438 12:13:23 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:30.438 12:13:23 -- common/autotest_common.sh@875 -- # return 0 00:16:30.438 12:13:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:30.438 12:13:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:30.438 12:13:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:30.696 /dev/nbd1 00:16:30.696 12:13:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:30.696 12:13:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:30.696 12:13:24 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:16:30.696 12:13:24 -- common/autotest_common.sh@855 -- # local i 00:16:30.696 12:13:24 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:30.696 12:13:24 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:30.696 12:13:24 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:16:30.696 12:13:24 -- common/autotest_common.sh@859 -- # break 00:16:30.696 12:13:24 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:30.696 12:13:24 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:30.696 12:13:24 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:30.696 1+0 records in 00:16:30.696 1+0 records out 00:16:30.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388376 s, 10.5 MB/s 00:16:30.696 12:13:24 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:30.696 12:13:24 -- common/autotest_common.sh@872 -- # size=4096 00:16:30.696 12:13:24 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:30.696 12:13:24 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:30.696 12:13:24 -- common/autotest_common.sh@875 -- # return 0 00:16:30.696 12:13:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:30.696 12:13:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:30.696 12:13:24 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:30.696 12:13:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:30.696 12:13:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:30.954 12:13:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:30.954 { 00:16:30.954 "nbd_device": "/dev/nbd0", 00:16:30.954 "bdev_name": "Malloc0" 00:16:30.954 }, 00:16:30.954 { 00:16:30.954 "nbd_device": "/dev/nbd1", 00:16:30.954 "bdev_name": "Malloc1" 00:16:30.954 } 00:16:30.954 ]' 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:31.212 { 00:16:31.212 "nbd_device": "/dev/nbd0", 00:16:31.212 "bdev_name": "Malloc0" 00:16:31.212 }, 00:16:31.212 { 00:16:31.212 "nbd_device": "/dev/nbd1", 00:16:31.212 "bdev_name": "Malloc1" 00:16:31.212 } 00:16:31.212 ]' 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@64 -- 
# nbd_disks_name='/dev/nbd0 00:16:31.212 /dev/nbd1' 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:31.212 /dev/nbd1' 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@65 -- # count=2 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@66 -- # echo 2 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@95 -- # count=2 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:31.212 256+0 records in 00:16:31.212 256+0 records out 00:16:31.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00671133 s, 156 MB/s 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:31.212 256+0 records in 00:16:31.212 256+0 records out 00:16:31.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243664 s, 43.0 MB/s 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:31.212 256+0 records in 00:16:31.212 256+0 records out 00:16:31.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253938 s, 41.3 MB/s 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@51 -- # local i 00:16:31.212 
12:13:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:31.212 12:13:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:31.469 12:13:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:31.469 12:13:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:31.469 12:13:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:31.469 12:13:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:31.469 12:13:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:31.469 12:13:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:31.469 12:13:24 -- bdev/nbd_common.sh@41 -- # break 00:16:31.469 12:13:24 -- bdev/nbd_common.sh@45 -- # return 0 00:16:31.469 12:13:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:31.469 12:13:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:31.727 12:13:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:31.727 12:13:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:31.727 12:13:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:31.727 12:13:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:31.727 12:13:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:31.727 12:13:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:31.727 12:13:25 -- bdev/nbd_common.sh@41 -- # break 00:16:31.727 12:13:25 -- bdev/nbd_common.sh@45 -- # return 0 00:16:31.727 12:13:25 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:31.727 12:13:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:31.727 12:13:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:31.985 12:13:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:31.985 12:13:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:31.985 12:13:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:32.246 12:13:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:32.246 12:13:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:32.246 12:13:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:32.246 12:13:25 -- bdev/nbd_common.sh@65 -- # true 00:16:32.246 12:13:25 -- bdev/nbd_common.sh@65 -- # count=0 00:16:32.246 12:13:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:32.246 12:13:25 -- bdev/nbd_common.sh@104 -- # count=0 00:16:32.246 12:13:25 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:32.246 12:13:25 -- bdev/nbd_common.sh@109 -- # return 0 00:16:32.246 12:13:25 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:32.514 12:13:25 -- event/event.sh@35 -- # sleep 3 00:16:32.787 [2024-04-26 12:13:26.020827] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:32.787 [2024-04-26 12:13:26.126757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.787 [2024-04-26 12:13:26.126762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.787 [2024-04-26 12:13:26.189445] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:32.787 [2024-04-26 12:13:26.189549] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
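The per-device readiness probe traced above (waitfornbd in autotest_common.sh) reduces to two steps: poll /proc/partitions until the kernel lists the device, then prove it answers I/O by reading one 4 KiB block with O_DIRECT and checking the copy is non-empty. A condensed sketch of that pattern follows; the temp-file path, retry pacing, and function name are illustrative, not the real script.

# Poll for the nbd device, then do a single direct-I/O read as a liveness check.
waitfornbd_sketch() {
    local nbd_name=$1 tmp_file=/tmp/nbdtest i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1        # assumed pacing; the trace only shows the retry bound
    done
    # One 4 KiB read straight from the device, bypassing the page cache.
    dd if=/dev/"$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s "$tmp_file")
    rm -f "$tmp_file"
    [ "$size" != 0 ]
}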
00:16:35.366 spdk_app_start Round 2 00:16:35.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:35.366 12:13:28 -- event/event.sh@23 -- # for i in {0..2} 00:16:35.366 12:13:28 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:16:35.366 12:13:28 -- event/event.sh@25 -- # waitforlisten 59813 /var/tmp/spdk-nbd.sock 00:16:35.366 12:13:28 -- common/autotest_common.sh@817 -- # '[' -z 59813 ']' 00:16:35.366 12:13:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:35.366 12:13:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:35.366 12:13:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:35.366 12:13:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:35.366 12:13:28 -- common/autotest_common.sh@10 -- # set +x 00:16:35.625 12:13:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:35.625 12:13:29 -- common/autotest_common.sh@850 -- # return 0 00:16:35.625 12:13:29 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:35.884 Malloc0 00:16:36.143 12:13:29 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:36.401 Malloc1 00:16:36.401 12:13:29 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:36.401 12:13:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:36.401 12:13:29 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:36.401 12:13:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:36.401 12:13:29 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:36.401 12:13:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:36.401 12:13:29 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:36.401 12:13:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:36.402 12:13:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:36.402 12:13:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:36.402 12:13:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:36.402 12:13:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:36.402 12:13:29 -- bdev/nbd_common.sh@12 -- # local i 00:16:36.402 12:13:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:36.402 12:13:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:36.402 12:13:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:36.402 /dev/nbd0 00:16:36.660 12:13:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:36.660 12:13:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:36.660 12:13:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:16:36.660 12:13:29 -- common/autotest_common.sh@855 -- # local i 00:16:36.660 12:13:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:36.660 12:13:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:36.660 12:13:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:16:36.660 12:13:29 -- common/autotest_common.sh@859 -- # break 00:16:36.660 12:13:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:36.660 12:13:29 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:16:36.660 12:13:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:36.660 1+0 records in 00:16:36.660 1+0 records out 00:16:36.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557997 s, 7.3 MB/s 00:16:36.661 12:13:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:36.661 12:13:29 -- common/autotest_common.sh@872 -- # size=4096 00:16:36.661 12:13:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:36.661 12:13:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:36.661 12:13:29 -- common/autotest_common.sh@875 -- # return 0 00:16:36.661 12:13:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:36.661 12:13:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:36.661 12:13:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:36.920 /dev/nbd1 00:16:36.920 12:13:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:36.920 12:13:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:36.920 12:13:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:16:36.920 12:13:30 -- common/autotest_common.sh@855 -- # local i 00:16:36.920 12:13:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:36.920 12:13:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:36.920 12:13:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:16:36.920 12:13:30 -- common/autotest_common.sh@859 -- # break 00:16:36.920 12:13:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:36.920 12:13:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:36.920 12:13:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:36.920 1+0 records in 00:16:36.920 1+0 records out 00:16:36.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330982 s, 12.4 MB/s 00:16:36.920 12:13:30 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:36.920 12:13:30 -- common/autotest_common.sh@872 -- # size=4096 00:16:36.920 12:13:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:36.920 12:13:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:36.920 12:13:30 -- common/autotest_common.sh@875 -- # return 0 00:16:36.920 12:13:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:36.920 12:13:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:36.920 12:13:30 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:36.920 12:13:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:36.920 12:13:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:37.179 { 00:16:37.179 "nbd_device": "/dev/nbd0", 00:16:37.179 "bdev_name": "Malloc0" 00:16:37.179 }, 00:16:37.179 { 00:16:37.179 "nbd_device": "/dev/nbd1", 00:16:37.179 "bdev_name": "Malloc1" 00:16:37.179 } 00:16:37.179 ]' 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:37.179 { 00:16:37.179 "nbd_device": "/dev/nbd0", 00:16:37.179 "bdev_name": "Malloc0" 00:16:37.179 }, 00:16:37.179 { 00:16:37.179 "nbd_device": "/dev/nbd1", 00:16:37.179 "bdev_name": "Malloc1" 00:16:37.179 } 
00:16:37.179 ]' 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:37.179 /dev/nbd1' 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:37.179 /dev/nbd1' 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@65 -- # count=2 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@66 -- # echo 2 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@95 -- # count=2 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:37.179 256+0 records in 00:16:37.179 256+0 records out 00:16:37.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120497 s, 87.0 MB/s 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:37.179 256+0 records in 00:16:37.179 256+0 records out 00:16:37.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243673 s, 43.0 MB/s 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:37.179 256+0 records in 00:16:37.179 256+0 records out 00:16:37.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267399 s, 39.2 MB/s 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:16:37.179 12:13:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@51 -- # local i 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.179 12:13:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:37.438 12:13:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:37.438 12:13:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:37.438 12:13:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:37.438 12:13:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.438 12:13:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.438 12:13:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:37.438 12:13:30 -- bdev/nbd_common.sh@41 -- # break 00:16:37.438 12:13:30 -- bdev/nbd_common.sh@45 -- # return 0 00:16:37.438 12:13:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.438 12:13:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:37.697 12:13:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:37.697 12:13:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:37.697 12:13:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:37.697 12:13:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.697 12:13:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.697 12:13:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:37.697 12:13:31 -- bdev/nbd_common.sh@41 -- # break 00:16:37.697 12:13:31 -- bdev/nbd_common.sh@45 -- # return 0 00:16:37.697 12:13:31 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:37.697 12:13:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:37.697 12:13:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:37.955 12:13:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:38.213 12:13:31 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:38.213 12:13:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:38.213 12:13:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:38.213 12:13:31 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:38.213 12:13:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:38.213 12:13:31 -- bdev/nbd_common.sh@65 -- # true 00:16:38.213 12:13:31 -- bdev/nbd_common.sh@65 -- # count=0 00:16:38.213 12:13:31 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:38.213 12:13:31 -- bdev/nbd_common.sh@104 -- # count=0 00:16:38.213 12:13:31 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:38.213 12:13:31 -- bdev/nbd_common.sh@109 -- # return 0 00:16:38.213 12:13:31 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:38.471 12:13:31 -- event/event.sh@35 -- # sleep 3 00:16:38.729 [2024-04-26 12:13:31.972463] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:38.729 [2024-04-26 12:13:32.076609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.729 [2024-04-26 12:13:32.076618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.729 [2024-04-26 12:13:32.134839] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
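The count check just above is how nbd_get_count verifies teardown: ask the target over its RPC socket which NBD devices it still exports, extract the device paths with jq, and count them. A minimal restatement of that query; the rpc.py path and socket match the trace, the wrapper name is illustrative.

# Expect $1 exported NBD devices on the target behind /var/tmp/spdk-nbd.sock.
nbd_count_sketch() {
    local expected=$1 rpc_sock=/var/tmp/spdk-nbd.sock names count
    names=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" nbd_get_disks \
            | jq -r '.[] | .nbd_device')
    # grep -c exits non-zero on an empty list, hence the || true guard.
    count=$(echo "$names" | grep -c /dev/nbd || true)
    [ "$count" -eq "$expected" ]
}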
00:16:38.729 [2024-04-26 12:13:32.134907] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:42.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:42.013 12:13:34 -- event/event.sh@38 -- # waitforlisten 59813 /var/tmp/spdk-nbd.sock 00:16:42.014 12:13:34 -- common/autotest_common.sh@817 -- # '[' -z 59813 ']' 00:16:42.014 12:13:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:42.014 12:13:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:42.014 12:13:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:42.014 12:13:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:42.014 12:13:34 -- common/autotest_common.sh@10 -- # set +x 00:16:42.014 12:13:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:42.014 12:13:35 -- common/autotest_common.sh@850 -- # return 0 00:16:42.014 12:13:35 -- event/event.sh@39 -- # killprocess 59813 00:16:42.014 12:13:35 -- common/autotest_common.sh@936 -- # '[' -z 59813 ']' 00:16:42.014 12:13:35 -- common/autotest_common.sh@940 -- # kill -0 59813 00:16:42.014 12:13:35 -- common/autotest_common.sh@941 -- # uname 00:16:42.014 12:13:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.014 12:13:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59813 00:16:42.014 killing process with pid 59813 00:16:42.014 12:13:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:42.014 12:13:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:42.014 12:13:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59813' 00:16:42.014 12:13:35 -- common/autotest_common.sh@955 -- # kill 59813 00:16:42.014 12:13:35 -- common/autotest_common.sh@960 -- # wait 59813 00:16:42.014 spdk_app_start is called in Round 0. 00:16:42.014 Shutdown signal received, stop current app iteration 00:16:42.014 Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 reinitialization... 00:16:42.014 spdk_app_start is called in Round 1. 00:16:42.014 Shutdown signal received, stop current app iteration 00:16:42.014 Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 reinitialization... 00:16:42.014 spdk_app_start is called in Round 2. 00:16:42.014 Shutdown signal received, stop current app iteration 00:16:42.014 Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 reinitialization... 00:16:42.014 spdk_app_start is called in Round 3. 
00:16:42.014 Shutdown signal received, stop current app iteration 00:16:42.014 ************************************ 00:16:42.014 END TEST app_repeat 00:16:42.014 ************************************ 00:16:42.014 12:13:35 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:16:42.014 12:13:35 -- event/event.sh@42 -- # return 0 00:16:42.014 00:16:42.014 real 0m19.448s 00:16:42.014 user 0m43.606s 00:16:42.014 sys 0m2.940s 00:16:42.014 12:13:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:42.014 12:13:35 -- common/autotest_common.sh@10 -- # set +x 00:16:42.014 12:13:35 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:16:42.014 12:13:35 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:16:42.014 12:13:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:42.014 12:13:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:42.014 12:13:35 -- common/autotest_common.sh@10 -- # set +x 00:16:42.014 ************************************ 00:16:42.014 START TEST cpu_locks 00:16:42.014 ************************************ 00:16:42.014 12:13:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:16:42.014 * Looking for test storage... 00:16:42.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:16:42.282 12:13:35 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:16:42.282 12:13:35 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:16:42.282 12:13:35 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:16:42.282 12:13:35 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:16:42.282 12:13:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:42.282 12:13:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:42.282 12:13:35 -- common/autotest_common.sh@10 -- # set +x 00:16:42.282 ************************************ 00:16:42.282 START TEST default_locks 00:16:42.282 ************************************ 00:16:42.282 12:13:35 -- common/autotest_common.sh@1111 -- # default_locks 00:16:42.282 12:13:35 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60260 00:16:42.282 12:13:35 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:42.282 12:13:35 -- event/cpu_locks.sh@47 -- # waitforlisten 60260 00:16:42.282 12:13:35 -- common/autotest_common.sh@817 -- # '[' -z 60260 ']' 00:16:42.282 12:13:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.282 12:13:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:42.282 12:13:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.282 12:13:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:42.282 12:13:35 -- common/autotest_common.sh@10 -- # set +x 00:16:42.282 [2024-04-26 12:13:35.624264] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:16:42.282 [2024-04-26 12:13:35.624364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60260 ] 00:16:42.542 [2024-04-26 12:13:35.760605] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.542 [2024-04-26 12:13:35.895571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.151 12:13:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:43.151 12:13:36 -- common/autotest_common.sh@850 -- # return 0 00:16:43.151 12:13:36 -- event/cpu_locks.sh@49 -- # locks_exist 60260 00:16:43.151 12:13:36 -- event/cpu_locks.sh@22 -- # lslocks -p 60260 00:16:43.151 12:13:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:43.718 12:13:37 -- event/cpu_locks.sh@50 -- # killprocess 60260 00:16:43.718 12:13:37 -- common/autotest_common.sh@936 -- # '[' -z 60260 ']' 00:16:43.718 12:13:37 -- common/autotest_common.sh@940 -- # kill -0 60260 00:16:43.718 12:13:37 -- common/autotest_common.sh@941 -- # uname 00:16:43.718 12:13:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:43.718 12:13:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60260 00:16:43.718 12:13:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:43.718 12:13:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:43.718 killing process with pid 60260 00:16:43.718 12:13:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60260' 00:16:43.718 12:13:37 -- common/autotest_common.sh@955 -- # kill 60260 00:16:43.718 12:13:37 -- common/autotest_common.sh@960 -- # wait 60260 00:16:44.284 12:13:37 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60260 00:16:44.284 12:13:37 -- common/autotest_common.sh@638 -- # local es=0 00:16:44.284 12:13:37 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 60260 00:16:44.284 12:13:37 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:16:44.284 12:13:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:44.284 12:13:37 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:16:44.284 12:13:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:44.284 12:13:37 -- common/autotest_common.sh@641 -- # waitforlisten 60260 00:16:44.284 12:13:37 -- common/autotest_common.sh@817 -- # '[' -z 60260 ']' 00:16:44.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.284 ERROR: process (pid: 60260) is no longer running 00:16:44.284 12:13:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.284 12:13:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:44.284 12:13:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:44.284 12:13:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:44.284 12:13:37 -- common/autotest_common.sh@10 -- # set +x 00:16:44.284 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (60260) - No such process 00:16:44.284 12:13:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:44.284 12:13:37 -- common/autotest_common.sh@850 -- # return 1 00:16:44.284 12:13:37 -- common/autotest_common.sh@641 -- # es=1 00:16:44.284 12:13:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:44.284 12:13:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:44.284 12:13:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:44.284 12:13:37 -- event/cpu_locks.sh@54 -- # no_locks 00:16:44.284 12:13:37 -- event/cpu_locks.sh@26 -- # lock_files=() 00:16:44.284 12:13:37 -- event/cpu_locks.sh@26 -- # local lock_files 00:16:44.284 12:13:37 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:16:44.284 00:16:44.284 real 0m2.028s 00:16:44.284 user 0m2.142s 00:16:44.284 sys 0m0.630s 00:16:44.284 12:13:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:44.284 12:13:37 -- common/autotest_common.sh@10 -- # set +x 00:16:44.284 ************************************ 00:16:44.284 END TEST default_locks 00:16:44.284 ************************************ 00:16:44.284 12:13:37 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:16:44.284 12:13:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:44.284 12:13:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:44.284 12:13:37 -- common/autotest_common.sh@10 -- # set +x 00:16:44.284 ************************************ 00:16:44.284 START TEST default_locks_via_rpc 00:16:44.284 ************************************ 00:16:44.284 12:13:37 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:16:44.284 12:13:37 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60316 00:16:44.284 12:13:37 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:44.284 12:13:37 -- event/cpu_locks.sh@63 -- # waitforlisten 60316 00:16:44.284 12:13:37 -- common/autotest_common.sh@817 -- # '[' -z 60316 ']' 00:16:44.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.284 12:13:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.284 12:13:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:44.284 12:13:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.284 12:13:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:44.284 12:13:37 -- common/autotest_common.sh@10 -- # set +x 00:16:44.542 [2024-04-26 12:13:37.769508] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
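The default_locks case that finishes above hinges on one check plus an orderly teardown: lslocks must show the target pid holding a lock on an spdk_cpu_lock file while it runs, and killing the process must release that lock (the follow-up NOT waitforlisten simply confirms the pid is gone). Condensed sketches of both pieces; helper names are illustrative and the real helpers carry more error handling.

# Lock presence: a file lock on a path containing spdk_cpu_lock means the
# reactor core is claimed by this pid.
locks_exist_sketch() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

# Teardown: confirm the pid still names an SPDK reactor, send SIGTERM, reap it.
killprocess_sketch() {
    local pid=$1
    [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ] || return 1
    kill "$pid"
    wait "$pid"
}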
00:16:44.542 [2024-04-26 12:13:37.769791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60316 ] 00:16:44.542 [2024-04-26 12:13:37.904723] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.800 [2024-04-26 12:13:38.061163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.366 12:13:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:45.366 12:13:38 -- common/autotest_common.sh@850 -- # return 0 00:16:45.366 12:13:38 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:16:45.366 12:13:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.366 12:13:38 -- common/autotest_common.sh@10 -- # set +x 00:16:45.366 12:13:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.366 12:13:38 -- event/cpu_locks.sh@67 -- # no_locks 00:16:45.366 12:13:38 -- event/cpu_locks.sh@26 -- # lock_files=() 00:16:45.366 12:13:38 -- event/cpu_locks.sh@26 -- # local lock_files 00:16:45.366 12:13:38 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:16:45.366 12:13:38 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:16:45.366 12:13:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.366 12:13:38 -- common/autotest_common.sh@10 -- # set +x 00:16:45.366 12:13:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.366 12:13:38 -- event/cpu_locks.sh@71 -- # locks_exist 60316 00:16:45.366 12:13:38 -- event/cpu_locks.sh@22 -- # lslocks -p 60316 00:16:45.366 12:13:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:45.932 12:13:39 -- event/cpu_locks.sh@73 -- # killprocess 60316 00:16:45.932 12:13:39 -- common/autotest_common.sh@936 -- # '[' -z 60316 ']' 00:16:45.932 12:13:39 -- common/autotest_common.sh@940 -- # kill -0 60316 00:16:45.932 12:13:39 -- common/autotest_common.sh@941 -- # uname 00:16:45.932 12:13:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:45.932 12:13:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60316 00:16:45.932 killing process with pid 60316 00:16:45.932 12:13:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:45.932 12:13:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:45.932 12:13:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60316' 00:16:45.932 12:13:39 -- common/autotest_common.sh@955 -- # kill 60316 00:16:45.932 12:13:39 -- common/autotest_common.sh@960 -- # wait 60316 00:16:46.190 00:16:46.190 real 0m1.869s 00:16:46.190 user 0m1.944s 00:16:46.190 sys 0m0.553s 00:16:46.190 12:13:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:46.190 12:13:39 -- common/autotest_common.sh@10 -- # set +x 00:16:46.190 ************************************ 00:16:46.190 END TEST default_locks_via_rpc 00:16:46.190 ************************************ 00:16:46.190 12:13:39 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:16:46.190 12:13:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:46.190 12:13:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:46.190 12:13:39 -- common/autotest_common.sh@10 -- # set +x 00:16:46.448 ************************************ 00:16:46.448 START TEST non_locking_app_on_locked_coremask 00:16:46.448 ************************************ 00:16:46.448 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.448 12:13:39 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:16:46.448 12:13:39 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60371 00:16:46.448 12:13:39 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:46.448 12:13:39 -- event/cpu_locks.sh@81 -- # waitforlisten 60371 /var/tmp/spdk.sock 00:16:46.448 12:13:39 -- common/autotest_common.sh@817 -- # '[' -z 60371 ']' 00:16:46.448 12:13:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.448 12:13:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:46.448 12:13:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.449 12:13:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:46.449 12:13:39 -- common/autotest_common.sh@10 -- # set +x 00:16:46.449 [2024-04-26 12:13:39.746280] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:16:46.449 [2024-04-26 12:13:39.746391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60371 ] 00:16:46.449 [2024-04-26 12:13:39.884449] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.707 [2024-04-26 12:13:40.022854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:47.273 12:13:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:47.273 12:13:40 -- common/autotest_common.sh@850 -- # return 0 00:16:47.273 12:13:40 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60387 00:16:47.273 12:13:40 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:16:47.273 12:13:40 -- event/cpu_locks.sh@85 -- # waitforlisten 60387 /var/tmp/spdk2.sock 00:16:47.273 12:13:40 -- common/autotest_common.sh@817 -- # '[' -z 60387 ']' 00:16:47.273 12:13:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:47.273 12:13:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:47.273 12:13:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:47.273 12:13:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:47.273 12:13:40 -- common/autotest_common.sh@10 -- # set +x 00:16:47.531 [2024-04-26 12:13:40.791565] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:16:47.531 [2024-04-26 12:13:40.791993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60387 ] 00:16:47.531 [2024-04-26 12:13:40.938589] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
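What this non_locking_app_on_locked_coremask run sets up: a first spdk_tgt on core mask 0x1 takes the core lock, and the second instance started just above can share the same mask only because it passes --disable-cpumask-locks and talks over its own RPC socket (hence the "CPU core locks deactivated" notice). A bare-bones sketch of that two-instance launch; the binary path and socket names mirror the trace, while the pid bookkeeping and readiness waits are trimmed.

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_BIN" -m 0x1 &                                        # claims core 0
locked_pid=$!

"$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
unlocked_pid=$!                                             # coexists, holds no lock

# The test then asserts that only $locked_pid shows an spdk_cpu_lock entry
# in lslocks output before killing both targets.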
00:16:47.531 [2024-04-26 12:13:40.938648] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.789 [2024-04-26 12:13:41.196816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.356 12:13:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:48.356 12:13:41 -- common/autotest_common.sh@850 -- # return 0 00:16:48.356 12:13:41 -- event/cpu_locks.sh@87 -- # locks_exist 60371 00:16:48.356 12:13:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:48.356 12:13:41 -- event/cpu_locks.sh@22 -- # lslocks -p 60371 00:16:49.292 12:13:42 -- event/cpu_locks.sh@89 -- # killprocess 60371 00:16:49.292 12:13:42 -- common/autotest_common.sh@936 -- # '[' -z 60371 ']' 00:16:49.292 12:13:42 -- common/autotest_common.sh@940 -- # kill -0 60371 00:16:49.292 12:13:42 -- common/autotest_common.sh@941 -- # uname 00:16:49.292 12:13:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:49.292 12:13:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60371 00:16:49.292 killing process with pid 60371 00:16:49.292 12:13:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:49.292 12:13:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:49.292 12:13:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60371' 00:16:49.292 12:13:42 -- common/autotest_common.sh@955 -- # kill 60371 00:16:49.292 12:13:42 -- common/autotest_common.sh@960 -- # wait 60371 00:16:50.227 12:13:43 -- event/cpu_locks.sh@90 -- # killprocess 60387 00:16:50.227 12:13:43 -- common/autotest_common.sh@936 -- # '[' -z 60387 ']' 00:16:50.227 12:13:43 -- common/autotest_common.sh@940 -- # kill -0 60387 00:16:50.227 12:13:43 -- common/autotest_common.sh@941 -- # uname 00:16:50.227 12:13:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:50.227 12:13:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60387 00:16:50.227 killing process with pid 60387 00:16:50.227 12:13:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:50.227 12:13:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:50.227 12:13:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60387' 00:16:50.227 12:13:43 -- common/autotest_common.sh@955 -- # kill 60387 00:16:50.227 12:13:43 -- common/autotest_common.sh@960 -- # wait 60387 00:16:50.794 00:16:50.794 real 0m4.391s 00:16:50.794 user 0m4.892s 00:16:50.794 sys 0m1.198s 00:16:50.794 12:13:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:50.794 12:13:44 -- common/autotest_common.sh@10 -- # set +x 00:16:50.794 ************************************ 00:16:50.794 END TEST non_locking_app_on_locked_coremask 00:16:50.794 ************************************ 00:16:50.794 12:13:44 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:16:50.794 12:13:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:50.794 12:13:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:50.794 12:13:44 -- common/autotest_common.sh@10 -- # set +x 00:16:50.794 ************************************ 00:16:50.794 START TEST locking_app_on_unlocked_coremask 00:16:50.794 ************************************ 00:16:50.794 12:13:44 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:16:50.794 12:13:44 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60459 00:16:50.794 12:13:44 -- event/cpu_locks.sh@99 -- # waitforlisten 60459 /var/tmp/spdk.sock 00:16:50.794 
12:13:44 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:16:50.794 12:13:44 -- common/autotest_common.sh@817 -- # '[' -z 60459 ']' 00:16:50.794 12:13:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.794 12:13:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:50.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.794 12:13:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.794 12:13:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:50.794 12:13:44 -- common/autotest_common.sh@10 -- # set +x 00:16:50.794 [2024-04-26 12:13:44.240442] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:16:50.794 [2024-04-26 12:13:44.240546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60459 ] 00:16:51.052 [2024-04-26 12:13:44.377872] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:16:51.052 [2024-04-26 12:13:44.377930] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.309 [2024-04-26 12:13:44.529466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.874 12:13:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:51.874 12:13:45 -- common/autotest_common.sh@850 -- # return 0 00:16:51.874 12:13:45 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60476 00:16:51.874 12:13:45 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:16:51.874 12:13:45 -- event/cpu_locks.sh@103 -- # waitforlisten 60476 /var/tmp/spdk2.sock 00:16:51.874 12:13:45 -- common/autotest_common.sh@817 -- # '[' -z 60476 ']' 00:16:51.874 12:13:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:51.874 12:13:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:51.874 12:13:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:51.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:51.874 12:13:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:51.874 12:13:45 -- common/autotest_common.sh@10 -- # set +x 00:16:51.874 [2024-04-26 12:13:45.334826] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
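Besides the --disable-cpumask-locks start-up flag used here, a running target can drop and re-take its core locks over RPC, which is what the default_locks_via_rpc case earlier in this log exercises with rpc_cmd. A short sketch against a live target on the default socket; the surrounding assertions are trimmed.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Release the per-core lock files held by the running target...
"$RPC" -s /var/tmp/spdk.sock framework_disable_cpumask_locks

# ...while they are released, lslocks -p <pid> shows no spdk_cpu_lock entry...

# ...then claim them again.
"$RPC" -s /var/tmp/spdk.sock framework_enable_cpumask_locks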
00:16:51.874 [2024-04-26 12:13:45.335194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60476 ] 00:16:52.132 [2024-04-26 12:13:45.481760] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.390 [2024-04-26 12:13:45.740694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.956 12:13:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:52.956 12:13:46 -- common/autotest_common.sh@850 -- # return 0 00:16:52.956 12:13:46 -- event/cpu_locks.sh@105 -- # locks_exist 60476 00:16:52.956 12:13:46 -- event/cpu_locks.sh@22 -- # lslocks -p 60476 00:16:52.956 12:13:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:53.891 12:13:47 -- event/cpu_locks.sh@107 -- # killprocess 60459 00:16:53.892 12:13:47 -- common/autotest_common.sh@936 -- # '[' -z 60459 ']' 00:16:53.892 12:13:47 -- common/autotest_common.sh@940 -- # kill -0 60459 00:16:53.892 12:13:47 -- common/autotest_common.sh@941 -- # uname 00:16:53.892 12:13:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:53.892 12:13:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60459 00:16:53.892 killing process with pid 60459 00:16:53.892 12:13:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:53.892 12:13:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:53.892 12:13:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60459' 00:16:53.892 12:13:47 -- common/autotest_common.sh@955 -- # kill 60459 00:16:53.892 12:13:47 -- common/autotest_common.sh@960 -- # wait 60459 00:16:54.827 12:13:48 -- event/cpu_locks.sh@108 -- # killprocess 60476 00:16:54.827 12:13:48 -- common/autotest_common.sh@936 -- # '[' -z 60476 ']' 00:16:54.827 12:13:48 -- common/autotest_common.sh@940 -- # kill -0 60476 00:16:54.827 12:13:48 -- common/autotest_common.sh@941 -- # uname 00:16:54.827 12:13:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:54.827 12:13:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60476 00:16:54.827 killing process with pid 60476 00:16:54.827 12:13:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:54.827 12:13:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:54.827 12:13:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60476' 00:16:54.827 12:13:48 -- common/autotest_common.sh@955 -- # kill 60476 00:16:54.827 12:13:48 -- common/autotest_common.sh@960 -- # wait 60476 00:16:55.086 ************************************ 00:16:55.086 END TEST locking_app_on_unlocked_coremask 00:16:55.086 ************************************ 00:16:55.086 00:16:55.086 real 0m4.304s 00:16:55.086 user 0m4.863s 00:16:55.086 sys 0m1.122s 00:16:55.086 12:13:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:55.086 12:13:48 -- common/autotest_common.sh@10 -- # set +x 00:16:55.086 12:13:48 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:16:55.086 12:13:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:55.086 12:13:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:55.086 12:13:48 -- common/autotest_common.sh@10 -- # set +x 00:16:55.344 ************************************ 00:16:55.344 START TEST locking_app_on_locked_coremask 00:16:55.344 
************************************ 00:16:55.344 12:13:48 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:16:55.344 12:13:48 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60552 00:16:55.344 12:13:48 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:55.344 12:13:48 -- event/cpu_locks.sh@116 -- # waitforlisten 60552 /var/tmp/spdk.sock 00:16:55.344 12:13:48 -- common/autotest_common.sh@817 -- # '[' -z 60552 ']' 00:16:55.344 12:13:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.344 12:13:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:55.344 12:13:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.344 12:13:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:55.344 12:13:48 -- common/autotest_common.sh@10 -- # set +x 00:16:55.344 [2024-04-26 12:13:48.656198] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:16:55.344 [2024-04-26 12:13:48.657111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60552 ] 00:16:55.344 [2024-04-26 12:13:48.796287] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.603 [2024-04-26 12:13:48.939891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.539 12:13:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:56.539 12:13:49 -- common/autotest_common.sh@850 -- # return 0 00:16:56.539 12:13:49 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60568 00:16:56.539 12:13:49 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60568 /var/tmp/spdk2.sock 00:16:56.539 12:13:49 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:16:56.539 12:13:49 -- common/autotest_common.sh@638 -- # local es=0 00:16:56.539 12:13:49 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 60568 /var/tmp/spdk2.sock 00:16:56.539 12:13:49 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:16:56.539 12:13:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:56.539 12:13:49 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:16:56.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:56.539 12:13:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:56.539 12:13:49 -- common/autotest_common.sh@641 -- # waitforlisten 60568 /var/tmp/spdk2.sock 00:16:56.539 12:13:49 -- common/autotest_common.sh@817 -- # '[' -z 60568 ']' 00:16:56.539 12:13:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:56.539 12:13:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:56.539 12:13:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:56.539 12:13:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:56.539 12:13:49 -- common/autotest_common.sh@10 -- # set +x 00:16:56.539 [2024-04-26 12:13:49.718185] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:16:56.539 [2024-04-26 12:13:49.718295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60568 ] 00:16:56.539 [2024-04-26 12:13:49.863520] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60552 has claimed it. 00:16:56.539 [2024-04-26 12:13:49.863599] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:16:57.106 ERROR: process (pid: 60568) is no longer running 00:16:57.106 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (60568) - No such process 00:16:57.106 12:13:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:57.106 12:13:50 -- common/autotest_common.sh@850 -- # return 1 00:16:57.106 12:13:50 -- common/autotest_common.sh@641 -- # es=1 00:16:57.106 12:13:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:57.106 12:13:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:57.106 12:13:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:57.106 12:13:50 -- event/cpu_locks.sh@122 -- # locks_exist 60552 00:16:57.106 12:13:50 -- event/cpu_locks.sh@22 -- # lslocks -p 60552 00:16:57.106 12:13:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:57.673 12:13:50 -- event/cpu_locks.sh@124 -- # killprocess 60552 00:16:57.673 12:13:50 -- common/autotest_common.sh@936 -- # '[' -z 60552 ']' 00:16:57.673 12:13:50 -- common/autotest_common.sh@940 -- # kill -0 60552 00:16:57.673 12:13:50 -- common/autotest_common.sh@941 -- # uname 00:16:57.673 12:13:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:57.673 12:13:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60552 00:16:57.673 killing process with pid 60552 00:16:57.673 12:13:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:57.673 12:13:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:57.673 12:13:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60552' 00:16:57.673 12:13:50 -- common/autotest_common.sh@955 -- # kill 60552 00:16:57.673 12:13:50 -- common/autotest_common.sh@960 -- # wait 60552 00:16:57.932 ************************************ 00:16:57.932 END TEST locking_app_on_locked_coremask 00:16:57.932 ************************************ 00:16:57.932 00:16:57.932 real 0m2.734s 00:16:57.932 user 0m3.149s 00:16:57.932 sys 0m0.664s 00:16:57.932 12:13:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:57.932 12:13:51 -- common/autotest_common.sh@10 -- # set +x 00:16:57.932 12:13:51 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:16:57.932 12:13:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:57.932 12:13:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:57.932 12:13:51 -- common/autotest_common.sh@10 -- # set +x 00:16:58.191 ************************************ 00:16:58.191 START TEST locking_overlapped_coremask 00:16:58.191 ************************************ 00:16:58.191 12:13:51 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:16:58.191 12:13:51 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60623 00:16:58.191 12:13:51 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:58.191 12:13:51 -- event/cpu_locks.sh@133 -- # waitforlisten 60623 /var/tmp/spdk.sock 00:16:58.191 
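The locking_app_on_locked_coremask case that ends above is a negative test: with the first target holding core 0, a second lock-holding target on the same mask must refuse to start, which is exactly the "Cannot create lock on core 0, probably process 60552 has claimed it" / "Unable to acquire lock on assigned core mask - exiting" sequence in the trace. A stripped-down version of that assertion; the NOT helper from autotest_common.sh is reduced to a plain exit-status check, and the sleep stands in for waitforlisten.

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_BIN" -m 0x1 &          # first instance claims core 0
first_pid=$!
sleep 1                       # the real test polls the RPC socket instead

# A second lock-holding instance on the same mask must exit non-zero.
if "$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "unexpected: second target acquired core 0" >&2
    exit 1
fi

kill "$first_pid"
wait "$first_pid"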
12:13:51 -- common/autotest_common.sh@817 -- # '[' -z 60623 ']' 00:16:58.191 12:13:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.191 12:13:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:58.191 12:13:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.191 12:13:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:58.191 12:13:51 -- common/autotest_common.sh@10 -- # set +x 00:16:58.191 [2024-04-26 12:13:51.510566] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:16:58.191 [2024-04-26 12:13:51.511027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60623 ] 00:16:58.191 [2024-04-26 12:13:51.653860] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:58.449 [2024-04-26 12:13:51.808406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.449 [2024-04-26 12:13:51.808487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.449 [2024-04-26 12:13:51.808480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.402 12:13:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:59.402 12:13:52 -- common/autotest_common.sh@850 -- # return 0 00:16:59.403 12:13:52 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:16:59.403 12:13:52 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60641 00:16:59.403 12:13:52 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60641 /var/tmp/spdk2.sock 00:16:59.403 12:13:52 -- common/autotest_common.sh@638 -- # local es=0 00:16:59.403 12:13:52 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 60641 /var/tmp/spdk2.sock 00:16:59.403 12:13:52 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:16:59.403 12:13:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:59.403 12:13:52 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:16:59.403 12:13:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:59.403 12:13:52 -- common/autotest_common.sh@641 -- # waitforlisten 60641 /var/tmp/spdk2.sock 00:16:59.403 12:13:52 -- common/autotest_common.sh@817 -- # '[' -z 60641 ']' 00:16:59.403 12:13:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:59.403 12:13:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:59.403 12:13:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:59.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:59.403 12:13:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:59.403 12:13:52 -- common/autotest_common.sh@10 -- # set +x 00:16:59.403 [2024-04-26 12:13:52.587808] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:16:59.403 [2024-04-26 12:13:52.588188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60641 ] 00:16:59.403 [2024-04-26 12:13:52.729863] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60623 has claimed it. 00:16:59.403 [2024-04-26 12:13:52.729949] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:16:59.995 ERROR: process (pid: 60641) is no longer running 00:16:59.995 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (60641) - No such process 00:16:59.995 12:13:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:59.995 12:13:53 -- common/autotest_common.sh@850 -- # return 1 00:16:59.995 12:13:53 -- common/autotest_common.sh@641 -- # es=1 00:16:59.995 12:13:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:59.995 12:13:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:59.995 12:13:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:59.995 12:13:53 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:16:59.995 12:13:53 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:16:59.995 12:13:53 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:16:59.995 12:13:53 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:16:59.995 12:13:53 -- event/cpu_locks.sh@141 -- # killprocess 60623 00:16:59.995 12:13:53 -- common/autotest_common.sh@936 -- # '[' -z 60623 ']' 00:16:59.995 12:13:53 -- common/autotest_common.sh@940 -- # kill -0 60623 00:16:59.995 12:13:53 -- common/autotest_common.sh@941 -- # uname 00:16:59.995 12:13:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:59.995 12:13:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60623 00:16:59.995 12:13:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:59.995 12:13:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:59.995 12:13:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60623' 00:16:59.995 killing process with pid 60623 00:16:59.995 12:13:53 -- common/autotest_common.sh@955 -- # kill 60623 00:16:59.995 12:13:53 -- common/autotest_common.sh@960 -- # wait 60623 00:17:00.558 00:17:00.558 real 0m2.538s 00:17:00.558 user 0m6.823s 00:17:00.558 sys 0m0.489s 00:17:00.558 12:13:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:00.558 12:13:53 -- common/autotest_common.sh@10 -- # set +x 00:17:00.558 ************************************ 00:17:00.558 END TEST locking_overlapped_coremask 00:17:00.558 ************************************ 00:17:00.558 12:13:54 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:17:00.558 12:13:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:00.558 12:13:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:00.558 12:13:54 -- common/autotest_common.sh@10 -- # set +x 00:17:00.817 ************************************ 00:17:00.817 START TEST locking_overlapped_coremask_via_rpc 00:17:00.817 ************************************ 
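The check_remaining_locks helper traced above expands the glob /var/tmp/spdk_cpu_lock_* and compares it against the expected set /var/tmp/spdk_cpu_lock_{000..002}: every core a target claims leaves one lock file behind, so a target started with -m 0x7 should account for exactly cores 0-2. A rough way to inspect that same state by hand, using only names and commands that already appear in this log (the pid is the 0x7 target from this run; anything beyond that is an assumption):

ls /var/tmp/spdk_cpu_lock_*                      # expect ..._000 ..._001 ..._002 while the 0x7 target is running
lslocks -p 60623 | grep spdk_cpu_lock            # confirms that target still holds the per-core locks (as cpu_locks.sh@22 does)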
00:17:00.817 12:13:54 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:17:00.817 12:13:54 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60692 00:17:00.817 12:13:54 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:17:00.817 12:13:54 -- event/cpu_locks.sh@149 -- # waitforlisten 60692 /var/tmp/spdk.sock 00:17:00.817 12:13:54 -- common/autotest_common.sh@817 -- # '[' -z 60692 ']' 00:17:00.817 12:13:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.817 12:13:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:00.817 12:13:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.817 12:13:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:00.817 12:13:54 -- common/autotest_common.sh@10 -- # set +x 00:17:00.817 [2024-04-26 12:13:54.147394] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:17:00.817 [2024-04-26 12:13:54.147493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60692 ] 00:17:00.817 [2024-04-26 12:13:54.279634] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:17:00.817 [2024-04-26 12:13:54.279706] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:01.075 [2024-04-26 12:13:54.429376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.075 [2024-04-26 12:13:54.429497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.075 [2024-04-26 12:13:54.429501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:01.642 12:13:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:01.642 12:13:55 -- common/autotest_common.sh@850 -- # return 0 00:17:01.642 12:13:55 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60710 00:17:01.642 12:13:55 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:17:01.642 12:13:55 -- event/cpu_locks.sh@153 -- # waitforlisten 60710 /var/tmp/spdk2.sock 00:17:01.642 12:13:55 -- common/autotest_common.sh@817 -- # '[' -z 60710 ']' 00:17:01.642 12:13:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:01.642 12:13:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:01.642 12:13:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:01.642 12:13:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:01.642 12:13:55 -- common/autotest_common.sh@10 -- # set +x 00:17:01.900 [2024-04-26 12:13:55.156739] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:17:01.900 [2024-04-26 12:13:55.156842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60710 ] 00:17:01.900 [2024-04-26 12:13:55.303531] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:17:01.900 [2024-04-26 12:13:55.303603] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:02.158 [2024-04-26 12:13:55.569051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.158 [2024-04-26 12:13:55.569145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:02.158 [2024-04-26 12:13:55.569149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.725 12:13:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:02.725 12:13:56 -- common/autotest_common.sh@850 -- # return 0 00:17:02.725 12:13:56 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:17:02.725 12:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.725 12:13:56 -- common/autotest_common.sh@10 -- # set +x 00:17:02.725 12:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.725 12:13:56 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:02.725 12:13:56 -- common/autotest_common.sh@638 -- # local es=0 00:17:02.725 12:13:56 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:02.725 12:13:56 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:02.725 12:13:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:02.725 12:13:56 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:02.725 12:13:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:02.725 12:13:56 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:02.725 12:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.725 12:13:56 -- common/autotest_common.sh@10 -- # set +x 00:17:02.725 [2024-04-26 12:13:56.145344] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60692 has claimed it. 00:17:02.725 request: 00:17:02.725 { 00:17:02.725 "method": "framework_enable_cpumask_locks", 00:17:02.725 "req_id": 1 00:17:02.725 } 00:17:02.725 Got JSON-RPC error response 00:17:02.725 response: 00:17:02.725 { 00:17:02.725 "code": -32603, 00:17:02.725 "message": "Failed to claim CPU core: 2" 00:17:02.725 } 00:17:02.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
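The JSON-RPC exchange above is the heart of the locking_overlapped_coremask_via_rpc case: both targets start with --disable-cpumask-locks, the first (mask 0x7, pid 60692) then takes the locks via framework_enable_cpumask_locks, and the same call on the second target (mask 0x1c) fails on the shared core 2. A sketch of reproducing that exchange by hand with SPDK's scripts/rpc.py client - the socket paths and masks are the ones used in this run, the exact command form is an assumption:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # first target claims cores 0-2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # overlaps on core 2, so it returns the
                                                                                                     # -32603 "Failed to claim CPU core: 2"
                                                                                                     # error shown in the response above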
00:17:02.725 12:13:56 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:02.725 12:13:56 -- common/autotest_common.sh@641 -- # es=1 00:17:02.725 12:13:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:02.725 12:13:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:02.725 12:13:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:02.725 12:13:56 -- event/cpu_locks.sh@158 -- # waitforlisten 60692 /var/tmp/spdk.sock 00:17:02.725 12:13:56 -- common/autotest_common.sh@817 -- # '[' -z 60692 ']' 00:17:02.725 12:13:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.725 12:13:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:02.725 12:13:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.725 12:13:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:02.725 12:13:56 -- common/autotest_common.sh@10 -- # set +x 00:17:02.984 12:13:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:02.984 12:13:56 -- common/autotest_common.sh@850 -- # return 0 00:17:02.984 12:13:56 -- event/cpu_locks.sh@159 -- # waitforlisten 60710 /var/tmp/spdk2.sock 00:17:02.984 12:13:56 -- common/autotest_common.sh@817 -- # '[' -z 60710 ']' 00:17:02.984 12:13:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:02.984 12:13:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:02.984 12:13:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:02.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:02.984 12:13:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:02.984 12:13:56 -- common/autotest_common.sh@10 -- # set +x 00:17:03.243 12:13:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:03.243 12:13:56 -- common/autotest_common.sh@850 -- # return 0 00:17:03.243 12:13:56 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:17:03.243 12:13:56 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:17:03.243 12:13:56 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:17:03.243 12:13:56 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:17:03.243 00:17:03.243 real 0m2.579s 00:17:03.243 user 0m1.323s 00:17:03.243 sys 0m0.189s 00:17:03.243 12:13:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:03.243 12:13:56 -- common/autotest_common.sh@10 -- # set +x 00:17:03.243 ************************************ 00:17:03.243 END TEST locking_overlapped_coremask_via_rpc 00:17:03.243 ************************************ 00:17:03.243 12:13:56 -- event/cpu_locks.sh@174 -- # cleanup 00:17:03.243 12:13:56 -- event/cpu_locks.sh@15 -- # [[ -z 60692 ]] 00:17:03.243 12:13:56 -- event/cpu_locks.sh@15 -- # killprocess 60692 00:17:03.243 12:13:56 -- common/autotest_common.sh@936 -- # '[' -z 60692 ']' 00:17:03.243 12:13:56 -- common/autotest_common.sh@940 -- # kill -0 60692 00:17:03.243 12:13:56 -- common/autotest_common.sh@941 -- # uname 00:17:03.243 12:13:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:03.243 12:13:56 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 60692 00:17:03.512 killing process with pid 60692 00:17:03.512 12:13:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:03.512 12:13:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:03.512 12:13:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60692' 00:17:03.512 12:13:56 -- common/autotest_common.sh@955 -- # kill 60692 00:17:03.512 12:13:56 -- common/autotest_common.sh@960 -- # wait 60692 00:17:03.784 12:13:57 -- event/cpu_locks.sh@16 -- # [[ -z 60710 ]] 00:17:03.784 12:13:57 -- event/cpu_locks.sh@16 -- # killprocess 60710 00:17:03.784 12:13:57 -- common/autotest_common.sh@936 -- # '[' -z 60710 ']' 00:17:03.784 12:13:57 -- common/autotest_common.sh@940 -- # kill -0 60710 00:17:03.784 12:13:57 -- common/autotest_common.sh@941 -- # uname 00:17:03.784 12:13:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:03.784 12:13:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60710 00:17:04.042 killing process with pid 60710 00:17:04.042 12:13:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:04.042 12:13:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:04.042 12:13:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60710' 00:17:04.042 12:13:57 -- common/autotest_common.sh@955 -- # kill 60710 00:17:04.042 12:13:57 -- common/autotest_common.sh@960 -- # wait 60710 00:17:04.301 12:13:57 -- event/cpu_locks.sh@18 -- # rm -f 00:17:04.301 12:13:57 -- event/cpu_locks.sh@1 -- # cleanup 00:17:04.301 12:13:57 -- event/cpu_locks.sh@15 -- # [[ -z 60692 ]] 00:17:04.301 12:13:57 -- event/cpu_locks.sh@15 -- # killprocess 60692 00:17:04.301 Process with pid 60692 is not found 00:17:04.301 12:13:57 -- common/autotest_common.sh@936 -- # '[' -z 60692 ']' 00:17:04.301 12:13:57 -- common/autotest_common.sh@940 -- # kill -0 60692 00:17:04.301 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (60692) - No such process 00:17:04.301 12:13:57 -- common/autotest_common.sh@963 -- # echo 'Process with pid 60692 is not found' 00:17:04.301 12:13:57 -- event/cpu_locks.sh@16 -- # [[ -z 60710 ]] 00:17:04.301 12:13:57 -- event/cpu_locks.sh@16 -- # killprocess 60710 00:17:04.301 12:13:57 -- common/autotest_common.sh@936 -- # '[' -z 60710 ']' 00:17:04.301 12:13:57 -- common/autotest_common.sh@940 -- # kill -0 60710 00:17:04.301 Process with pid 60710 is not found 00:17:04.301 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (60710) - No such process 00:17:04.301 12:13:57 -- common/autotest_common.sh@963 -- # echo 'Process with pid 60710 is not found' 00:17:04.301 12:13:57 -- event/cpu_locks.sh@18 -- # rm -f 00:17:04.301 00:17:04.301 real 0m22.372s 00:17:04.301 user 0m37.824s 00:17:04.301 sys 0m5.959s 00:17:04.301 ************************************ 00:17:04.301 END TEST cpu_locks 00:17:04.301 ************************************ 00:17:04.301 12:13:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:04.301 12:13:57 -- common/autotest_common.sh@10 -- # set +x 00:17:04.559 ************************************ 00:17:04.559 END TEST event 00:17:04.559 ************************************ 00:17:04.559 00:17:04.559 real 0m49.993s 00:17:04.559 user 1m34.061s 00:17:04.559 sys 0m9.929s 00:17:04.559 12:13:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:04.559 12:13:57 -- common/autotest_common.sh@10 -- # set +x 00:17:04.559 12:13:57 -- spdk/autotest.sh@178 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:17:04.559 12:13:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:04.559 12:13:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:04.559 12:13:57 -- common/autotest_common.sh@10 -- # set +x 00:17:04.559 ************************************ 00:17:04.559 START TEST thread 00:17:04.559 ************************************ 00:17:04.559 12:13:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:17:04.559 * Looking for test storage... 00:17:04.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:17:04.559 12:13:57 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:04.559 12:13:57 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:17:04.559 12:13:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:04.559 12:13:57 -- common/autotest_common.sh@10 -- # set +x 00:17:04.817 ************************************ 00:17:04.817 START TEST thread_poller_perf 00:17:04.817 ************************************ 00:17:04.817 12:13:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:04.817 [2024-04-26 12:13:58.086283] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:17:04.817 [2024-04-26 12:13:58.086934] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60842 ] 00:17:04.817 [2024-04-26 12:13:58.220611] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.075 [2024-04-26 12:13:58.367766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.075 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:17:06.552 ====================================== 00:17:06.552 busy:2215965072 (cyc) 00:17:06.552 total_run_count: 312000 00:17:06.552 tsc_hz: 2200000000 (cyc) 00:17:06.552 ====================================== 00:17:06.552 poller_cost: 7102 (cyc), 3228 (nsec) 00:17:06.552 00:17:06.552 real 0m1.460s 00:17:06.552 user 0m1.287s 00:17:06.552 sys 0m0.061s 00:17:06.552 12:13:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:06.552 12:13:59 -- common/autotest_common.sh@10 -- # set +x 00:17:06.552 ************************************ 00:17:06.552 END TEST thread_poller_perf 00:17:06.552 ************************************ 00:17:06.552 12:13:59 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:17:06.552 12:13:59 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:17:06.552 12:13:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:06.552 12:13:59 -- common/autotest_common.sh@10 -- # set +x 00:17:06.552 ************************************ 00:17:06.552 START TEST thread_poller_perf 00:17:06.552 ************************************ 00:17:06.552 12:13:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:17:06.552 [2024-04-26 12:13:59.655595] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:17:06.552 [2024-04-26 12:13:59.655723] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60887 ] 00:17:06.552 [2024-04-26 12:13:59.802197] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.552 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:17:06.552 [2024-04-26 12:13:59.944160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.931 ====================================== 00:17:07.931 busy:2202227856 (cyc) 00:17:07.931 total_run_count: 4133000 00:17:07.931 tsc_hz: 2200000000 (cyc) 00:17:07.931 ====================================== 00:17:07.931 poller_cost: 532 (cyc), 241 (nsec) 00:17:07.931 00:17:07.931 real 0m1.437s 00:17:07.931 user 0m1.265s 00:17:07.931 sys 0m0.064s 00:17:07.931 ************************************ 00:17:07.931 END TEST thread_poller_perf 00:17:07.931 ************************************ 00:17:07.931 12:14:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:07.931 12:14:01 -- common/autotest_common.sh@10 -- # set +x 00:17:07.931 12:14:01 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:17:07.931 ************************************ 00:17:07.931 END TEST thread 00:17:07.931 ************************************ 00:17:07.931 00:17:07.931 real 0m3.201s 00:17:07.931 user 0m2.680s 00:17:07.931 sys 0m0.272s 00:17:07.931 12:14:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:07.931 12:14:01 -- common/autotest_common.sh@10 -- # set +x 00:17:07.931 12:14:01 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:17:07.931 12:14:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:07.931 12:14:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:07.931 12:14:01 -- common/autotest_common.sh@10 -- # set +x 00:17:07.931 ************************************ 00:17:07.931 START TEST accel 00:17:07.931 ************************************ 00:17:07.931 12:14:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:17:07.931 * Looking for test storage... 00:17:07.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:17:07.931 12:14:01 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:17:07.931 12:14:01 -- accel/accel.sh@82 -- # get_expected_opcs 00:17:07.931 12:14:01 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:17:07.931 12:14:01 -- accel/accel.sh@62 -- # spdk_tgt_pid=60961 00:17:07.931 12:14:01 -- accel/accel.sh@63 -- # waitforlisten 60961 00:17:07.931 12:14:01 -- accel/accel.sh@61 -- # build_accel_config 00:17:07.931 12:14:01 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:17:07.931 12:14:01 -- common/autotest_common.sh@817 -- # '[' -z 60961 ']' 00:17:07.931 12:14:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.931 12:14:01 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:07.932 12:14:01 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:07.932 12:14:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:07.932 12:14:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:07.932 12:14:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:07.932 12:14:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
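The two poller_perf summaries above are consistent with poller_cost being busy cycles divided by total_run_count, and the nanosecond figure being that cost divided by tsc_hz. A quick sanity check with the numbers from this run (bash integer division truncates, which matches the reported values):

echo $(( 2215965072 / 312000 ))      # 7102 cycles per poller for the 1 microsecond period run (-l 1)
echo $(( 2202227856 / 4133000 ))     # 532 cycles per poller for the busy-poll run (-l 0)
# at tsc_hz = 2200000000: 7102 / 2.2 is about 3228 nsec and 532 / 2.2 is about 241 nsec, matching the output above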
00:17:07.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.932 12:14:01 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:07.932 12:14:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:07.932 12:14:01 -- accel/accel.sh@40 -- # local IFS=, 00:17:07.932 12:14:01 -- common/autotest_common.sh@10 -- # set +x 00:17:07.932 12:14:01 -- accel/accel.sh@41 -- # jq -r . 00:17:07.932 [2024-04-26 12:14:01.348278] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:17:07.932 [2024-04-26 12:14:01.348640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60961 ] 00:17:08.191 [2024-04-26 12:14:01.482835] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.191 [2024-04-26 12:14:01.616189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.125 12:14:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:09.125 12:14:02 -- common/autotest_common.sh@850 -- # return 0 00:17:09.125 12:14:02 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:17:09.125 12:14:02 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:17:09.125 12:14:02 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:17:09.125 12:14:02 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:17:09.125 12:14:02 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:17:09.125 12:14:02 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:17:09.125 12:14:02 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:17:09.125 12:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.125 12:14:02 -- common/autotest_common.sh@10 -- # set +x 00:17:09.125 12:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.125 12:14:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:09.125 12:14:02 -- accel/accel.sh@72 -- # IFS== 00:17:09.125 12:14:02 -- accel/accel.sh@72 -- # read -r opc module 00:17:09.125 12:14:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:09.125 12:14:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:09.125 12:14:02 -- accel/accel.sh@72 -- # IFS== 00:17:09.125 12:14:02 -- accel/accel.sh@72 -- # read -r opc module 00:17:09.125 12:14:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:09.125 12:14:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:09.125 12:14:02 -- accel/accel.sh@72 -- # IFS== 00:17:09.125 12:14:02 -- accel/accel.sh@72 -- # read -r opc module 00:17:09.125 12:14:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:09.125 12:14:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:09.125 12:14:02 -- accel/accel.sh@72 -- # IFS== 00:17:09.125 12:14:02 -- accel/accel.sh@72 -- # read -r opc module 00:17:09.125 12:14:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:09.125 12:14:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:09.125 12:14:02 -- accel/accel.sh@72 -- # IFS== 00:17:09.125 12:14:02 -- accel/accel.sh@72 -- # read -r opc module 00:17:09.125 12:14:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:09.126 12:14:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:09.126 12:14:02 -- accel/accel.sh@72 -- # IFS== 00:17:09.126 12:14:02 -- accel/accel.sh@72 
-- # read -r opc module 00:17:09.126 12:14:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:09.126 12:14:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:09.126 12:14:02 -- accel/accel.sh@72 -- # IFS== 00:17:09.126 12:14:02 -- accel/accel.sh@72 -- # read -r opc module 00:17:09.126 12:14:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:09.126 12:14:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:09.126 12:14:02 -- accel/accel.sh@72 -- # IFS== 00:17:09.126 12:14:02 -- accel/accel.sh@72 -- # read -r opc module 00:17:09.126 12:14:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:09.126 12:14:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:09.126 12:14:02 -- accel/accel.sh@72 -- # IFS== 00:17:09.126 12:14:02 -- accel/accel.sh@72 -- # read -r opc module 00:17:09.126 12:14:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:09.126 12:14:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:09.126 12:14:02 -- accel/accel.sh@72 -- # IFS== 00:17:09.126 12:14:02 -- accel/accel.sh@72 -- # read -r opc module 00:17:09.126 12:14:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:09.126 12:14:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:09.126 12:14:02 -- accel/accel.sh@72 -- # IFS== 00:17:09.126 12:14:02 -- accel/accel.sh@72 -- # read -r opc module 00:17:09.126 12:14:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:09.126 12:14:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:09.126 12:14:02 -- accel/accel.sh@72 -- # IFS== 00:17:09.126 12:14:02 -- accel/accel.sh@72 -- # read -r opc module 00:17:09.126 12:14:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:09.126 12:14:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:09.126 12:14:02 -- accel/accel.sh@72 -- # IFS== 00:17:09.126 12:14:02 -- accel/accel.sh@72 -- # read -r opc module 00:17:09.126 12:14:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:09.126 12:14:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:17:09.126 12:14:02 -- accel/accel.sh@72 -- # IFS== 00:17:09.126 12:14:02 -- accel/accel.sh@72 -- # read -r opc module 00:17:09.126 12:14:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:17:09.126 12:14:02 -- accel/accel.sh@75 -- # killprocess 60961 00:17:09.126 12:14:02 -- common/autotest_common.sh@936 -- # '[' -z 60961 ']' 00:17:09.126 12:14:02 -- common/autotest_common.sh@940 -- # kill -0 60961 00:17:09.126 12:14:02 -- common/autotest_common.sh@941 -- # uname 00:17:09.126 12:14:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:09.126 12:14:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60961 00:17:09.126 killing process with pid 60961 00:17:09.126 12:14:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:09.126 12:14:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:09.126 12:14:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60961' 00:17:09.126 12:14:02 -- common/autotest_common.sh@955 -- # kill 60961 00:17:09.126 12:14:02 -- common/autotest_common.sh@960 -- # wait 60961 00:17:09.692 12:14:02 -- accel/accel.sh@76 -- # trap - ERR 00:17:09.692 12:14:02 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:17:09.692 12:14:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:09.692 12:14:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:17:09.692 12:14:02 -- common/autotest_common.sh@10 -- # set +x 00:17:09.692 12:14:02 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:17:09.692 12:14:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:17:09.692 12:14:02 -- accel/accel.sh@12 -- # build_accel_config 00:17:09.692 12:14:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:09.692 12:14:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:09.692 12:14:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:09.692 12:14:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:09.692 12:14:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:09.692 12:14:02 -- accel/accel.sh@40 -- # local IFS=, 00:17:09.692 12:14:02 -- accel/accel.sh@41 -- # jq -r . 00:17:09.692 12:14:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:09.692 12:14:02 -- common/autotest_common.sh@10 -- # set +x 00:17:09.692 12:14:02 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:17:09.692 12:14:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:17:09.692 12:14:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:09.692 12:14:02 -- common/autotest_common.sh@10 -- # set +x 00:17:09.692 ************************************ 00:17:09.692 START TEST accel_missing_filename 00:17:09.692 ************************************ 00:17:09.692 12:14:03 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:17:09.692 12:14:03 -- common/autotest_common.sh@638 -- # local es=0 00:17:09.692 12:14:03 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:17:09.692 12:14:03 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:17:09.692 12:14:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:09.692 12:14:03 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:17:09.692 12:14:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:09.692 12:14:03 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:17:09.692 12:14:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:17:09.692 12:14:03 -- accel/accel.sh@12 -- # build_accel_config 00:17:09.692 12:14:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:09.692 12:14:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:09.692 12:14:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:09.692 12:14:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:09.692 12:14:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:09.692 12:14:03 -- accel/accel.sh@40 -- # local IFS=, 00:17:09.692 12:14:03 -- accel/accel.sh@41 -- # jq -r . 00:17:09.692 [2024-04-26 12:14:03.098958] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:17:09.692 [2024-04-26 12:14:03.099081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61026 ] 00:17:09.950 [2024-04-26 12:14:03.234711] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.950 [2024-04-26 12:14:03.354128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.950 [2024-04-26 12:14:03.409073] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:10.208 [2024-04-26 12:14:03.487827] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:17:10.208 A filename is required. 00:17:10.208 12:14:03 -- common/autotest_common.sh@641 -- # es=234 00:17:10.208 12:14:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:10.208 12:14:03 -- common/autotest_common.sh@650 -- # es=106 00:17:10.208 12:14:03 -- common/autotest_common.sh@651 -- # case "$es" in 00:17:10.208 12:14:03 -- common/autotest_common.sh@658 -- # es=1 00:17:10.208 12:14:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:10.208 00:17:10.208 real 0m0.534s 00:17:10.208 user 0m0.364s 00:17:10.208 sys 0m0.114s 00:17:10.208 ************************************ 00:17:10.208 END TEST accel_missing_filename 00:17:10.208 ************************************ 00:17:10.208 12:14:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:10.208 12:14:03 -- common/autotest_common.sh@10 -- # set +x 00:17:10.208 12:14:03 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:10.208 12:14:03 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:17:10.208 12:14:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:10.208 12:14:03 -- common/autotest_common.sh@10 -- # set +x 00:17:10.467 ************************************ 00:17:10.467 START TEST accel_compress_verify 00:17:10.468 ************************************ 00:17:10.468 12:14:03 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:10.468 12:14:03 -- common/autotest_common.sh@638 -- # local es=0 00:17:10.468 12:14:03 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:10.468 12:14:03 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:17:10.468 12:14:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:10.468 12:14:03 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:17:10.468 12:14:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:10.468 12:14:03 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:10.468 12:14:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:10.468 12:14:03 -- accel/accel.sh@12 -- # build_accel_config 00:17:10.468 12:14:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:10.468 12:14:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:10.468 12:14:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:10.468 12:14:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:10.468 12:14:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:10.468 12:14:03 -- accel/accel.sh@40 -- # local IFS=, 00:17:10.468 
12:14:03 -- accel/accel.sh@41 -- # jq -r . 00:17:10.468 [2024-04-26 12:14:03.746488] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:17:10.468 [2024-04-26 12:14:03.746628] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61060 ] 00:17:10.468 [2024-04-26 12:14:03.889514] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.726 [2024-04-26 12:14:04.007888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.726 [2024-04-26 12:14:04.064307] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:10.726 [2024-04-26 12:14:04.140759] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:17:10.984 00:17:10.984 Compression does not support the verify option, aborting. 00:17:10.984 12:14:04 -- common/autotest_common.sh@641 -- # es=161 00:17:10.984 12:14:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:10.984 ************************************ 00:17:10.984 END TEST accel_compress_verify 00:17:10.984 ************************************ 00:17:10.984 12:14:04 -- common/autotest_common.sh@650 -- # es=33 00:17:10.984 12:14:04 -- common/autotest_common.sh@651 -- # case "$es" in 00:17:10.984 12:14:04 -- common/autotest_common.sh@658 -- # es=1 00:17:10.984 12:14:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:10.984 00:17:10.984 real 0m0.537s 00:17:10.984 user 0m0.368s 00:17:10.984 sys 0m0.116s 00:17:10.984 12:14:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:10.984 12:14:04 -- common/autotest_common.sh@10 -- # set +x 00:17:10.984 12:14:04 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:17:10.984 12:14:04 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:17:10.984 12:14:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:10.984 12:14:04 -- common/autotest_common.sh@10 -- # set +x 00:17:10.984 ************************************ 00:17:10.984 START TEST accel_wrong_workload 00:17:10.984 ************************************ 00:17:10.984 12:14:04 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:17:10.984 12:14:04 -- common/autotest_common.sh@638 -- # local es=0 00:17:10.984 12:14:04 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:17:10.984 12:14:04 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:17:10.984 12:14:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:10.984 12:14:04 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:17:10.984 12:14:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:10.984 12:14:04 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:17:10.984 12:14:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:17:10.984 12:14:04 -- accel/accel.sh@12 -- # build_accel_config 00:17:10.984 12:14:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:10.984 12:14:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:10.984 12:14:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:10.984 12:14:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:10.984 12:14:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:10.984 12:14:04 -- accel/accel.sh@40 -- # local IFS=, 00:17:10.984 12:14:04 -- accel/accel.sh@41 -- # jq -r . 
00:17:10.984 Unsupported workload type: foobar 00:17:10.985 [2024-04-26 12:14:04.392551] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:17:10.985 accel_perf options: 00:17:10.985 [-h help message] 00:17:10.985 [-q queue depth per core] 00:17:10.985 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:17:10.985 [-T number of threads per core 00:17:10.985 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:17:10.985 [-t time in seconds] 00:17:10.985 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:17:10.985 [ dif_verify, , dif_generate, dif_generate_copy 00:17:10.985 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:17:10.985 [-l for compress/decompress workloads, name of uncompressed input file 00:17:10.985 [-S for crc32c workload, use this seed value (default 0) 00:17:10.985 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:17:10.985 [-f for fill workload, use this BYTE value (default 255) 00:17:10.985 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:17:10.985 [-y verify result if this switch is on] 00:17:10.985 [-a tasks to allocate per core (default: same value as -q)] 00:17:10.985 Can be used to spread operations across a wider range of memory. 00:17:10.985 12:14:04 -- common/autotest_common.sh@641 -- # es=1 00:17:10.985 12:14:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:10.985 12:14:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:10.985 12:14:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:10.985 00:17:10.985 real 0m0.036s 00:17:10.985 user 0m0.016s 00:17:10.985 sys 0m0.019s 00:17:10.985 ************************************ 00:17:10.985 END TEST accel_wrong_workload 00:17:10.985 ************************************ 00:17:10.985 12:14:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:10.985 12:14:04 -- common/autotest_common.sh@10 -- # set +x 00:17:10.985 12:14:04 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:17:10.985 12:14:04 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:17:10.985 12:14:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:10.985 12:14:04 -- common/autotest_common.sh@10 -- # set +x 00:17:11.243 ************************************ 00:17:11.243 START TEST accel_negative_buffers 00:17:11.243 ************************************ 00:17:11.243 12:14:04 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:17:11.243 12:14:04 -- common/autotest_common.sh@638 -- # local es=0 00:17:11.243 12:14:04 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:17:11.243 12:14:04 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:17:11.243 12:14:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:11.243 12:14:04 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:17:11.243 12:14:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:11.243 12:14:04 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:17:11.243 12:14:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:17:11.243 12:14:04 -- accel/accel.sh@12 -- # 
build_accel_config 00:17:11.243 12:14:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:11.243 12:14:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:11.243 12:14:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:11.243 12:14:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:11.243 12:14:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:11.243 12:14:04 -- accel/accel.sh@40 -- # local IFS=, 00:17:11.243 12:14:04 -- accel/accel.sh@41 -- # jq -r . 00:17:11.243 -x option must be non-negative. 00:17:11.243 [2024-04-26 12:14:04.542860] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:17:11.243 accel_perf options: 00:17:11.243 [-h help message] 00:17:11.243 [-q queue depth per core] 00:17:11.243 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:17:11.243 [-T number of threads per core 00:17:11.243 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:17:11.243 [-t time in seconds] 00:17:11.243 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:17:11.243 [ dif_verify, , dif_generate, dif_generate_copy 00:17:11.243 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:17:11.243 [-l for compress/decompress workloads, name of uncompressed input file 00:17:11.243 [-S for crc32c workload, use this seed value (default 0) 00:17:11.243 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:17:11.243 [-f for fill workload, use this BYTE value (default 255) 00:17:11.243 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:17:11.243 [-y verify result if this switch is on] 00:17:11.243 [-a tasks to allocate per core (default: same value as -q)] 00:17:11.243 Can be used to spread operations across a wider range of memory. 
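The usage text above (printed once for the unsupported '-w foobar' workload and once for the negative '-x -1' buffer count) documents every accel_perf flag the surrounding tests exercise. For reference, the positive cases in this log drive the same binary roughly as follows - paths, flags and test names are taken from the log itself, and only the interesting parts of each full invocation are shown:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
                                                                                     # accel_compress_verify: fails, compress does not support -y
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y      # accel_crc32c: 1 s of CRC32C with seed 32, verify on
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2       # accel_crc32c_C2: same, with an io vector size of 2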
00:17:11.243 12:14:04 -- common/autotest_common.sh@641 -- # es=1 00:17:11.243 12:14:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:11.243 12:14:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:11.243 12:14:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:11.243 00:17:11.243 real 0m0.036s 00:17:11.243 user 0m0.013s 00:17:11.243 sys 0m0.018s 00:17:11.243 12:14:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:11.243 ************************************ 00:17:11.243 END TEST accel_negative_buffers 00:17:11.243 ************************************ 00:17:11.243 12:14:04 -- common/autotest_common.sh@10 -- # set +x 00:17:11.243 12:14:04 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:17:11.243 12:14:04 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:17:11.243 12:14:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:11.243 12:14:04 -- common/autotest_common.sh@10 -- # set +x 00:17:11.243 ************************************ 00:17:11.243 START TEST accel_crc32c 00:17:11.243 ************************************ 00:17:11.243 12:14:04 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:17:11.243 12:14:04 -- accel/accel.sh@16 -- # local accel_opc 00:17:11.243 12:14:04 -- accel/accel.sh@17 -- # local accel_module 00:17:11.243 12:14:04 -- accel/accel.sh@19 -- # IFS=: 00:17:11.243 12:14:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:17:11.243 12:14:04 -- accel/accel.sh@19 -- # read -r var val 00:17:11.243 12:14:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:17:11.243 12:14:04 -- accel/accel.sh@12 -- # build_accel_config 00:17:11.243 12:14:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:11.243 12:14:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:11.243 12:14:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:11.243 12:14:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:11.243 12:14:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:11.243 12:14:04 -- accel/accel.sh@40 -- # local IFS=, 00:17:11.243 12:14:04 -- accel/accel.sh@41 -- # jq -r . 00:17:11.243 [2024-04-26 12:14:04.683150] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:17:11.243 [2024-04-26 12:14:04.683293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61138 ] 00:17:11.508 [2024-04-26 12:14:04.827294] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.508 [2024-04-26 12:14:04.959037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.766 12:14:05 -- accel/accel.sh@20 -- # val= 00:17:11.766 12:14:05 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.766 12:14:05 -- accel/accel.sh@19 -- # IFS=: 00:17:11.766 12:14:05 -- accel/accel.sh@19 -- # read -r var val 00:17:11.767 12:14:05 -- accel/accel.sh@20 -- # val= 00:17:11.767 12:14:05 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # IFS=: 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # read -r var val 00:17:11.767 12:14:05 -- accel/accel.sh@20 -- # val=0x1 00:17:11.767 12:14:05 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # IFS=: 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # read -r var val 00:17:11.767 12:14:05 -- accel/accel.sh@20 -- # val= 00:17:11.767 12:14:05 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # IFS=: 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # read -r var val 00:17:11.767 12:14:05 -- accel/accel.sh@20 -- # val= 00:17:11.767 12:14:05 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # IFS=: 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # read -r var val 00:17:11.767 12:14:05 -- accel/accel.sh@20 -- # val=crc32c 00:17:11.767 12:14:05 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.767 12:14:05 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # IFS=: 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # read -r var val 00:17:11.767 12:14:05 -- accel/accel.sh@20 -- # val=32 00:17:11.767 12:14:05 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # IFS=: 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # read -r var val 00:17:11.767 12:14:05 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:11.767 12:14:05 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # IFS=: 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # read -r var val 00:17:11.767 12:14:05 -- accel/accel.sh@20 -- # val= 00:17:11.767 12:14:05 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # IFS=: 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # read -r var val 00:17:11.767 12:14:05 -- accel/accel.sh@20 -- # val=software 00:17:11.767 12:14:05 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.767 12:14:05 -- accel/accel.sh@22 -- # accel_module=software 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # IFS=: 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # read -r var val 00:17:11.767 12:14:05 -- accel/accel.sh@20 -- # val=32 00:17:11.767 12:14:05 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # IFS=: 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # read -r var val 00:17:11.767 12:14:05 -- accel/accel.sh@20 -- # val=32 00:17:11.767 12:14:05 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # IFS=: 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # read -r var val 00:17:11.767 12:14:05 -- accel/accel.sh@20 -- # val=1 00:17:11.767 12:14:05 
-- accel/accel.sh@21 -- # case "$var" in 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # IFS=: 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # read -r var val 00:17:11.767 12:14:05 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:11.767 12:14:05 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # IFS=: 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # read -r var val 00:17:11.767 12:14:05 -- accel/accel.sh@20 -- # val=Yes 00:17:11.767 12:14:05 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # IFS=: 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # read -r var val 00:17:11.767 12:14:05 -- accel/accel.sh@20 -- # val= 00:17:11.767 12:14:05 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # IFS=: 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # read -r var val 00:17:11.767 12:14:05 -- accel/accel.sh@20 -- # val= 00:17:11.767 12:14:05 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # IFS=: 00:17:11.767 12:14:05 -- accel/accel.sh@19 -- # read -r var val 00:17:13.141 12:14:06 -- accel/accel.sh@20 -- # val= 00:17:13.141 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.141 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.141 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.141 12:14:06 -- accel/accel.sh@20 -- # val= 00:17:13.141 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.141 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.141 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.141 12:14:06 -- accel/accel.sh@20 -- # val= 00:17:13.141 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.141 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.141 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.141 12:14:06 -- accel/accel.sh@20 -- # val= 00:17:13.141 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.141 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.141 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.141 12:14:06 -- accel/accel.sh@20 -- # val= 00:17:13.141 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.141 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.141 ************************************ 00:17:13.141 END TEST accel_crc32c 00:17:13.141 ************************************ 00:17:13.142 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.142 12:14:06 -- accel/accel.sh@20 -- # val= 00:17:13.142 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.142 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.142 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.142 12:14:06 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:13.142 12:14:06 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:17:13.142 12:14:06 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:13.142 00:17:13.142 real 0m1.560s 00:17:13.142 user 0m1.346s 00:17:13.142 sys 0m0.119s 00:17:13.142 12:14:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:13.142 12:14:06 -- common/autotest_common.sh@10 -- # set +x 00:17:13.142 12:14:06 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:17:13.142 12:14:06 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:17:13.142 12:14:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:13.142 12:14:06 -- common/autotest_common.sh@10 -- # set +x 00:17:13.142 ************************************ 00:17:13.142 START TEST accel_crc32c_C2 00:17:13.142 
************************************ 00:17:13.142 12:14:06 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:17:13.142 12:14:06 -- accel/accel.sh@16 -- # local accel_opc 00:17:13.142 12:14:06 -- accel/accel.sh@17 -- # local accel_module 00:17:13.142 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.142 12:14:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:17:13.142 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.142 12:14:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:17:13.142 12:14:06 -- accel/accel.sh@12 -- # build_accel_config 00:17:13.142 12:14:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:13.142 12:14:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:13.142 12:14:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:13.142 12:14:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:13.142 12:14:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:13.142 12:14:06 -- accel/accel.sh@40 -- # local IFS=, 00:17:13.142 12:14:06 -- accel/accel.sh@41 -- # jq -r . 00:17:13.142 [2024-04-26 12:14:06.357522] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:17:13.142 [2024-04-26 12:14:06.357604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61172 ] 00:17:13.142 [2024-04-26 12:14:06.491301] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.400 [2024-04-26 12:14:06.611430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.400 12:14:06 -- accel/accel.sh@20 -- # val= 00:17:13.400 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.400 12:14:06 -- accel/accel.sh@20 -- # val= 00:17:13.400 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.400 12:14:06 -- accel/accel.sh@20 -- # val=0x1 00:17:13.400 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.400 12:14:06 -- accel/accel.sh@20 -- # val= 00:17:13.400 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.400 12:14:06 -- accel/accel.sh@20 -- # val= 00:17:13.400 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.400 12:14:06 -- accel/accel.sh@20 -- # val=crc32c 00:17:13.400 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.400 12:14:06 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.400 12:14:06 -- accel/accel.sh@20 -- # val=0 00:17:13.400 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.400 12:14:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:13.400 12:14:06 -- accel/accel.sh@21 -- # case "$var" 
in 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.400 12:14:06 -- accel/accel.sh@20 -- # val= 00:17:13.400 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.400 12:14:06 -- accel/accel.sh@20 -- # val=software 00:17:13.400 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.400 12:14:06 -- accel/accel.sh@22 -- # accel_module=software 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.400 12:14:06 -- accel/accel.sh@20 -- # val=32 00:17:13.400 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.400 12:14:06 -- accel/accel.sh@20 -- # val=32 00:17:13.400 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.400 12:14:06 -- accel/accel.sh@20 -- # val=1 00:17:13.400 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.400 12:14:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:13.400 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.400 12:14:06 -- accel/accel.sh@20 -- # val=Yes 00:17:13.400 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.400 12:14:06 -- accel/accel.sh@20 -- # val= 00:17:13.400 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:13.400 12:14:06 -- accel/accel.sh@20 -- # val= 00:17:13.400 12:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # IFS=: 00:17:13.400 12:14:06 -- accel/accel.sh@19 -- # read -r var val 00:17:14.805 12:14:07 -- accel/accel.sh@20 -- # val= 00:17:14.805 12:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.805 12:14:07 -- accel/accel.sh@19 -- # IFS=: 00:17:14.805 12:14:07 -- accel/accel.sh@19 -- # read -r var val 00:17:14.805 12:14:07 -- accel/accel.sh@20 -- # val= 00:17:14.805 12:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.805 12:14:07 -- accel/accel.sh@19 -- # IFS=: 00:17:14.805 12:14:07 -- accel/accel.sh@19 -- # read -r var val 00:17:14.805 12:14:07 -- accel/accel.sh@20 -- # val= 00:17:14.805 12:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.805 12:14:07 -- accel/accel.sh@19 -- # IFS=: 00:17:14.805 12:14:07 -- accel/accel.sh@19 -- # read -r var val 00:17:14.805 12:14:07 -- accel/accel.sh@20 -- # val= 00:17:14.805 12:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.805 12:14:07 -- accel/accel.sh@19 -- # IFS=: 00:17:14.805 12:14:07 -- accel/accel.sh@19 -- # read -r var val 00:17:14.805 12:14:07 -- accel/accel.sh@20 -- # val= 00:17:14.805 12:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.805 12:14:07 -- accel/accel.sh@19 -- # IFS=: 00:17:14.805 12:14:07 -- accel/accel.sh@19 -- # read -r var val 00:17:14.805 12:14:07 -- accel/accel.sh@20 -- # val= 
00:17:14.805 12:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.805 12:14:07 -- accel/accel.sh@19 -- # IFS=: 00:17:14.805 12:14:07 -- accel/accel.sh@19 -- # read -r var val 00:17:14.805 12:14:07 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:14.805 12:14:07 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:17:14.805 12:14:07 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:14.805 00:17:14.805 real 0m1.603s 00:17:14.805 user 0m1.391s 00:17:14.805 sys 0m0.113s 00:17:14.805 12:14:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:14.805 12:14:07 -- common/autotest_common.sh@10 -- # set +x 00:17:14.805 ************************************ 00:17:14.805 END TEST accel_crc32c_C2 00:17:14.805 ************************************ 00:17:14.805 12:14:07 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:17:14.805 12:14:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:17:14.805 12:14:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:14.805 12:14:07 -- common/autotest_common.sh@10 -- # set +x 00:17:14.805 ************************************ 00:17:14.805 START TEST accel_copy 00:17:14.805 ************************************ 00:17:14.805 12:14:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:17:14.805 12:14:08 -- accel/accel.sh@16 -- # local accel_opc 00:17:14.805 12:14:08 -- accel/accel.sh@17 -- # local accel_module 00:17:14.805 12:14:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:17:14.805 12:14:08 -- accel/accel.sh@19 -- # IFS=: 00:17:14.805 12:14:08 -- accel/accel.sh@19 -- # read -r var val 00:17:14.805 12:14:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:17:14.805 12:14:08 -- accel/accel.sh@12 -- # build_accel_config 00:17:14.805 12:14:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:14.805 12:14:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:14.805 12:14:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:14.805 12:14:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:14.805 12:14:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:14.805 12:14:08 -- accel/accel.sh@40 -- # local IFS=, 00:17:14.805 12:14:08 -- accel/accel.sh@41 -- # jq -r . 00:17:14.805 [2024-04-26 12:14:08.079431] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
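The accel_copy pass starting here is driven by the same accel_perf example binary already visible in the trace (/home/vagrant/spdk_repo/spdk/build/examples/accel_perf). A minimal sketch of reproducing that run by hand follows; it assumes a built SPDK tree at that path and that the effectively empty JSON config the harness feeds over /dev/fd/62 can simply be dropped so the default software accel module is used.
# sketch: re-run the copy workload outside the harness (assumptions noted above)
cd /home/vagrant/spdk_repo/spdk
./build/examples/accel_perf -t 1 -w copy -y   # 1-second run, copy workload; -y passed through as in accel_test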
00:17:14.805 [2024-04-26 12:14:08.079547] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61216 ] 00:17:14.805 [2024-04-26 12:14:08.216129] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.064 [2024-04-26 12:14:08.354958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.064 12:14:08 -- accel/accel.sh@20 -- # val= 00:17:15.064 12:14:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.064 12:14:08 -- accel/accel.sh@19 -- # IFS=: 00:17:15.064 12:14:08 -- accel/accel.sh@19 -- # read -r var val 00:17:15.064 12:14:08 -- accel/accel.sh@20 -- # val= 00:17:15.064 12:14:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.064 12:14:08 -- accel/accel.sh@19 -- # IFS=: 00:17:15.064 12:14:08 -- accel/accel.sh@19 -- # read -r var val 00:17:15.064 12:14:08 -- accel/accel.sh@20 -- # val=0x1 00:17:15.064 12:14:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.064 12:14:08 -- accel/accel.sh@19 -- # IFS=: 00:17:15.064 12:14:08 -- accel/accel.sh@19 -- # read -r var val 00:17:15.064 12:14:08 -- accel/accel.sh@20 -- # val= 00:17:15.064 12:14:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.064 12:14:08 -- accel/accel.sh@19 -- # IFS=: 00:17:15.064 12:14:08 -- accel/accel.sh@19 -- # read -r var val 00:17:15.064 12:14:08 -- accel/accel.sh@20 -- # val= 00:17:15.064 12:14:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.064 12:14:08 -- accel/accel.sh@19 -- # IFS=: 00:17:15.064 12:14:08 -- accel/accel.sh@19 -- # read -r var val 00:17:15.064 12:14:08 -- accel/accel.sh@20 -- # val=copy 00:17:15.065 12:14:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.065 12:14:08 -- accel/accel.sh@23 -- # accel_opc=copy 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # IFS=: 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # read -r var val 00:17:15.065 12:14:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:15.065 12:14:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # IFS=: 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # read -r var val 00:17:15.065 12:14:08 -- accel/accel.sh@20 -- # val= 00:17:15.065 12:14:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # IFS=: 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # read -r var val 00:17:15.065 12:14:08 -- accel/accel.sh@20 -- # val=software 00:17:15.065 12:14:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.065 12:14:08 -- accel/accel.sh@22 -- # accel_module=software 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # IFS=: 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # read -r var val 00:17:15.065 12:14:08 -- accel/accel.sh@20 -- # val=32 00:17:15.065 12:14:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # IFS=: 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # read -r var val 00:17:15.065 12:14:08 -- accel/accel.sh@20 -- # val=32 00:17:15.065 12:14:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # IFS=: 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # read -r var val 00:17:15.065 12:14:08 -- accel/accel.sh@20 -- # val=1 00:17:15.065 12:14:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # IFS=: 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # read -r var val 00:17:15.065 12:14:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:15.065 
12:14:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # IFS=: 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # read -r var val 00:17:15.065 12:14:08 -- accel/accel.sh@20 -- # val=Yes 00:17:15.065 12:14:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # IFS=: 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # read -r var val 00:17:15.065 12:14:08 -- accel/accel.sh@20 -- # val= 00:17:15.065 12:14:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # IFS=: 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # read -r var val 00:17:15.065 12:14:08 -- accel/accel.sh@20 -- # val= 00:17:15.065 12:14:08 -- accel/accel.sh@21 -- # case "$var" in 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # IFS=: 00:17:15.065 12:14:08 -- accel/accel.sh@19 -- # read -r var val 00:17:16.440 12:14:09 -- accel/accel.sh@20 -- # val= 00:17:16.440 12:14:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.440 12:14:09 -- accel/accel.sh@19 -- # IFS=: 00:17:16.440 12:14:09 -- accel/accel.sh@19 -- # read -r var val 00:17:16.440 12:14:09 -- accel/accel.sh@20 -- # val= 00:17:16.440 12:14:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.440 12:14:09 -- accel/accel.sh@19 -- # IFS=: 00:17:16.440 12:14:09 -- accel/accel.sh@19 -- # read -r var val 00:17:16.440 12:14:09 -- accel/accel.sh@20 -- # val= 00:17:16.440 12:14:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.440 12:14:09 -- accel/accel.sh@19 -- # IFS=: 00:17:16.440 12:14:09 -- accel/accel.sh@19 -- # read -r var val 00:17:16.440 12:14:09 -- accel/accel.sh@20 -- # val= 00:17:16.440 12:14:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.440 12:14:09 -- accel/accel.sh@19 -- # IFS=: 00:17:16.440 12:14:09 -- accel/accel.sh@19 -- # read -r var val 00:17:16.441 12:14:09 -- accel/accel.sh@20 -- # val= 00:17:16.441 12:14:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.441 12:14:09 -- accel/accel.sh@19 -- # IFS=: 00:17:16.441 12:14:09 -- accel/accel.sh@19 -- # read -r var val 00:17:16.441 12:14:09 -- accel/accel.sh@20 -- # val= 00:17:16.441 12:14:09 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.441 12:14:09 -- accel/accel.sh@19 -- # IFS=: 00:17:16.441 12:14:09 -- accel/accel.sh@19 -- # read -r var val 00:17:16.441 12:14:09 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:16.441 12:14:09 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:17:16.441 12:14:09 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:16.441 00:17:16.441 real 0m1.605s 00:17:16.441 user 0m1.385s 00:17:16.441 sys 0m0.126s 00:17:16.441 12:14:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:16.441 12:14:09 -- common/autotest_common.sh@10 -- # set +x 00:17:16.441 ************************************ 00:17:16.441 END TEST accel_copy 00:17:16.441 ************************************ 00:17:16.441 12:14:09 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:17:16.441 12:14:09 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:17:16.441 12:14:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:16.441 12:14:09 -- common/autotest_common.sh@10 -- # set +x 00:17:16.441 ************************************ 00:17:16.441 START TEST accel_fill 00:17:16.441 ************************************ 00:17:16.441 12:14:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:17:16.441 12:14:09 -- accel/accel.sh@16 -- # local accel_opc 00:17:16.441 12:14:09 -- accel/accel.sh@17 -- # local 
accel_module 00:17:16.441 12:14:09 -- accel/accel.sh@19 -- # IFS=: 00:17:16.441 12:14:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:17:16.441 12:14:09 -- accel/accel.sh@19 -- # read -r var val 00:17:16.441 12:14:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:17:16.441 12:14:09 -- accel/accel.sh@12 -- # build_accel_config 00:17:16.441 12:14:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:16.441 12:14:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:16.441 12:14:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:16.441 12:14:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:16.441 12:14:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:16.441 12:14:09 -- accel/accel.sh@40 -- # local IFS=, 00:17:16.441 12:14:09 -- accel/accel.sh@41 -- # jq -r . 00:17:16.441 [2024-04-26 12:14:09.814980] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:17:16.441 [2024-04-26 12:14:09.815140] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61260 ] 00:17:16.700 [2024-04-26 12:14:09.962215] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.700 [2024-04-26 12:14:10.101467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.700 12:14:10 -- accel/accel.sh@20 -- # val= 00:17:16.700 12:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.700 12:14:10 -- accel/accel.sh@19 -- # IFS=: 00:17:16.700 12:14:10 -- accel/accel.sh@19 -- # read -r var val 00:17:16.700 12:14:10 -- accel/accel.sh@20 -- # val= 00:17:16.700 12:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.700 12:14:10 -- accel/accel.sh@19 -- # IFS=: 00:17:16.700 12:14:10 -- accel/accel.sh@19 -- # read -r var val 00:17:16.700 12:14:10 -- accel/accel.sh@20 -- # val=0x1 00:17:16.700 12:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.700 12:14:10 -- accel/accel.sh@19 -- # IFS=: 00:17:16.700 12:14:10 -- accel/accel.sh@19 -- # read -r var val 00:17:16.700 12:14:10 -- accel/accel.sh@20 -- # val= 00:17:16.700 12:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.700 12:14:10 -- accel/accel.sh@19 -- # IFS=: 00:17:16.700 12:14:10 -- accel/accel.sh@19 -- # read -r var val 00:17:16.700 12:14:10 -- accel/accel.sh@20 -- # val= 00:17:16.700 12:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.700 12:14:10 -- accel/accel.sh@19 -- # IFS=: 00:17:16.700 12:14:10 -- accel/accel.sh@19 -- # read -r var val 00:17:16.700 12:14:10 -- accel/accel.sh@20 -- # val=fill 00:17:16.700 12:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.700 12:14:10 -- accel/accel.sh@23 -- # accel_opc=fill 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # IFS=: 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # read -r var val 00:17:16.958 12:14:10 -- accel/accel.sh@20 -- # val=0x80 00:17:16.958 12:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # IFS=: 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # read -r var val 00:17:16.958 12:14:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:16.958 12:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # IFS=: 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # read -r var val 00:17:16.958 12:14:10 -- accel/accel.sh@20 -- # val= 00:17:16.958 12:14:10 -- accel/accel.sh@21 -- # case 
"$var" in 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # IFS=: 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # read -r var val 00:17:16.958 12:14:10 -- accel/accel.sh@20 -- # val=software 00:17:16.958 12:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.958 12:14:10 -- accel/accel.sh@22 -- # accel_module=software 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # IFS=: 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # read -r var val 00:17:16.958 12:14:10 -- accel/accel.sh@20 -- # val=64 00:17:16.958 12:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # IFS=: 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # read -r var val 00:17:16.958 12:14:10 -- accel/accel.sh@20 -- # val=64 00:17:16.958 12:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # IFS=: 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # read -r var val 00:17:16.958 12:14:10 -- accel/accel.sh@20 -- # val=1 00:17:16.958 12:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # IFS=: 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # read -r var val 00:17:16.958 12:14:10 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:16.958 12:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # IFS=: 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # read -r var val 00:17:16.958 12:14:10 -- accel/accel.sh@20 -- # val=Yes 00:17:16.958 12:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # IFS=: 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # read -r var val 00:17:16.958 12:14:10 -- accel/accel.sh@20 -- # val= 00:17:16.958 12:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # IFS=: 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # read -r var val 00:17:16.958 12:14:10 -- accel/accel.sh@20 -- # val= 00:17:16.958 12:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # IFS=: 00:17:16.958 12:14:10 -- accel/accel.sh@19 -- # read -r var val 00:17:18.333 12:14:11 -- accel/accel.sh@20 -- # val= 00:17:18.333 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.333 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.333 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.333 12:14:11 -- accel/accel.sh@20 -- # val= 00:17:18.333 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.333 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.333 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.333 12:14:11 -- accel/accel.sh@20 -- # val= 00:17:18.333 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.333 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.333 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.333 12:14:11 -- accel/accel.sh@20 -- # val= 00:17:18.333 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.333 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.333 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.333 12:14:11 -- accel/accel.sh@20 -- # val= 00:17:18.333 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.333 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.333 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.333 ************************************ 00:17:18.333 END TEST accel_fill 00:17:18.333 ************************************ 00:17:18.333 12:14:11 -- accel/accel.sh@20 -- # val= 00:17:18.333 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.333 12:14:11 -- 
accel/accel.sh@19 -- # IFS=: 00:17:18.333 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.333 12:14:11 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:18.333 12:14:11 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:17:18.333 12:14:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:18.333 00:17:18.333 real 0m1.591s 00:17:18.333 user 0m1.359s 00:17:18.333 sys 0m0.137s 00:17:18.333 12:14:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:18.333 12:14:11 -- common/autotest_common.sh@10 -- # set +x 00:17:18.333 12:14:11 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:17:18.333 12:14:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:17:18.333 12:14:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:18.333 12:14:11 -- common/autotest_common.sh@10 -- # set +x 00:17:18.333 ************************************ 00:17:18.333 START TEST accel_copy_crc32c 00:17:18.333 ************************************ 00:17:18.333 12:14:11 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:17:18.333 12:14:11 -- accel/accel.sh@16 -- # local accel_opc 00:17:18.333 12:14:11 -- accel/accel.sh@17 -- # local accel_module 00:17:18.333 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.333 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.333 12:14:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:17:18.333 12:14:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:17:18.333 12:14:11 -- accel/accel.sh@12 -- # build_accel_config 00:17:18.333 12:14:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:18.333 12:14:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:18.333 12:14:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:18.333 12:14:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:18.333 12:14:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:18.333 12:14:11 -- accel/accel.sh@40 -- # local IFS=, 00:17:18.333 12:14:11 -- accel/accel.sh@41 -- # jq -r . 00:17:18.333 [2024-04-26 12:14:11.519911] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
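The copy_crc32c cases launched here follow the same pattern; the sketch below shows both the plain invocation used for this TEST and the -C 2 variant run by the following TEST (accel_copy_crc32c_C2), on the assumption that -C controls how many source vectors the chained copy plus CRC-32C operation is split across (the 8192-byte buffer in that later trace is consistent with this).
# sketch: copy_crc32c as invoked by accel_test, with and without chaining
./build/examples/accel_perf -t 1 -w copy_crc32c -y        # single 4096-byte source, as dumped in the config above
./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2   # assumed: source split across two vectors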
00:17:18.333 [2024-04-26 12:14:11.520003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61293 ] 00:17:18.333 [2024-04-26 12:14:11.658207] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.333 [2024-04-26 12:14:11.791606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.592 12:14:11 -- accel/accel.sh@20 -- # val= 00:17:18.592 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.592 12:14:11 -- accel/accel.sh@20 -- # val= 00:17:18.592 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.592 12:14:11 -- accel/accel.sh@20 -- # val=0x1 00:17:18.592 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.592 12:14:11 -- accel/accel.sh@20 -- # val= 00:17:18.592 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.592 12:14:11 -- accel/accel.sh@20 -- # val= 00:17:18.592 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.592 12:14:11 -- accel/accel.sh@20 -- # val=copy_crc32c 00:17:18.592 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.592 12:14:11 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.592 12:14:11 -- accel/accel.sh@20 -- # val=0 00:17:18.592 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.592 12:14:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:18.592 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.592 12:14:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:18.592 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.592 12:14:11 -- accel/accel.sh@20 -- # val= 00:17:18.592 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.592 12:14:11 -- accel/accel.sh@20 -- # val=software 00:17:18.592 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.592 12:14:11 -- accel/accel.sh@22 -- # accel_module=software 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.592 12:14:11 -- accel/accel.sh@20 -- # val=32 00:17:18.592 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.592 12:14:11 -- accel/accel.sh@20 -- # val=32 
00:17:18.592 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.592 12:14:11 -- accel/accel.sh@20 -- # val=1 00:17:18.592 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.592 12:14:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:18.592 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.592 12:14:11 -- accel/accel.sh@20 -- # val=Yes 00:17:18.592 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.592 12:14:11 -- accel/accel.sh@20 -- # val= 00:17:18.592 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:18.592 12:14:11 -- accel/accel.sh@20 -- # val= 00:17:18.592 12:14:11 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # IFS=: 00:17:18.592 12:14:11 -- accel/accel.sh@19 -- # read -r var val 00:17:19.968 12:14:13 -- accel/accel.sh@20 -- # val= 00:17:19.968 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.968 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:19.968 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:19.968 12:14:13 -- accel/accel.sh@20 -- # val= 00:17:19.968 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.968 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:19.968 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:19.968 12:14:13 -- accel/accel.sh@20 -- # val= 00:17:19.968 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.968 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:19.968 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:19.968 12:14:13 -- accel/accel.sh@20 -- # val= 00:17:19.968 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.968 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:19.968 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:19.968 12:14:13 -- accel/accel.sh@20 -- # val= 00:17:19.968 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.968 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:19.968 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:19.968 12:14:13 -- accel/accel.sh@20 -- # val= 00:17:19.968 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.968 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:19.968 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:19.968 12:14:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:19.968 12:14:13 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:17:19.968 12:14:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:19.968 00:17:19.968 real 0m1.567s 00:17:19.968 user 0m1.360s 00:17:19.968 sys 0m0.114s 00:17:19.968 ************************************ 00:17:19.968 END TEST accel_copy_crc32c 00:17:19.968 ************************************ 00:17:19.968 12:14:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:19.968 12:14:13 -- common/autotest_common.sh@10 -- # set +x 00:17:19.968 12:14:13 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:17:19.968 12:14:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:17:19.968 12:14:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:19.968 12:14:13 -- common/autotest_common.sh@10 -- # set +x 00:17:19.968 ************************************ 00:17:19.968 START TEST accel_copy_crc32c_C2 00:17:19.968 ************************************ 00:17:19.968 12:14:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:17:19.968 12:14:13 -- accel/accel.sh@16 -- # local accel_opc 00:17:19.968 12:14:13 -- accel/accel.sh@17 -- # local accel_module 00:17:19.968 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:19.968 12:14:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:17:19.968 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:19.968 12:14:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:17:19.968 12:14:13 -- accel/accel.sh@12 -- # build_accel_config 00:17:19.968 12:14:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:19.968 12:14:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:19.968 12:14:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:19.968 12:14:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:19.968 12:14:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:19.968 12:14:13 -- accel/accel.sh@40 -- # local IFS=, 00:17:19.968 12:14:13 -- accel/accel.sh@41 -- # jq -r . 00:17:19.968 [2024-04-26 12:14:13.202792] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:17:19.968 [2024-04-26 12:14:13.202888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61337 ] 00:17:19.968 [2024-04-26 12:14:13.348967] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.227 [2024-04-26 12:14:13.505073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.227 12:14:13 -- accel/accel.sh@20 -- # val= 00:17:20.227 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:20.227 12:14:13 -- accel/accel.sh@20 -- # val= 00:17:20.227 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:20.227 12:14:13 -- accel/accel.sh@20 -- # val=0x1 00:17:20.227 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:20.227 12:14:13 -- accel/accel.sh@20 -- # val= 00:17:20.227 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:20.227 12:14:13 -- accel/accel.sh@20 -- # val= 00:17:20.227 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:20.227 12:14:13 -- accel/accel.sh@20 -- # val=copy_crc32c 00:17:20.227 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.227 12:14:13 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:20.227 12:14:13 -- accel/accel.sh@20 -- # val=0 00:17:20.227 12:14:13 -- 
accel/accel.sh@21 -- # case "$var" in 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:20.227 12:14:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:20.227 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:20.227 12:14:13 -- accel/accel.sh@20 -- # val='8192 bytes' 00:17:20.227 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:20.227 12:14:13 -- accel/accel.sh@20 -- # val= 00:17:20.227 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:20.227 12:14:13 -- accel/accel.sh@20 -- # val=software 00:17:20.227 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.227 12:14:13 -- accel/accel.sh@22 -- # accel_module=software 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:20.227 12:14:13 -- accel/accel.sh@20 -- # val=32 00:17:20.227 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:20.227 12:14:13 -- accel/accel.sh@20 -- # val=32 00:17:20.227 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:20.227 12:14:13 -- accel/accel.sh@20 -- # val=1 00:17:20.227 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:20.227 12:14:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:20.227 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:20.227 12:14:13 -- accel/accel.sh@20 -- # val=Yes 00:17:20.227 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:20.227 12:14:13 -- accel/accel.sh@20 -- # val= 00:17:20.227 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:20.227 12:14:13 -- accel/accel.sh@20 -- # val= 00:17:20.227 12:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # IFS=: 00:17:20.227 12:14:13 -- accel/accel.sh@19 -- # read -r var val 00:17:21.602 12:14:14 -- accel/accel.sh@20 -- # val= 00:17:21.602 12:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.602 12:14:14 -- accel/accel.sh@19 -- # IFS=: 00:17:21.602 12:14:14 -- accel/accel.sh@19 -- # read -r var val 00:17:21.602 12:14:14 -- accel/accel.sh@20 -- # val= 00:17:21.602 12:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.602 12:14:14 -- accel/accel.sh@19 -- # IFS=: 00:17:21.602 12:14:14 -- accel/accel.sh@19 -- # read -r var val 00:17:21.602 12:14:14 -- accel/accel.sh@20 -- # val= 00:17:21.602 12:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.602 12:14:14 -- accel/accel.sh@19 -- # IFS=: 00:17:21.602 12:14:14 -- accel/accel.sh@19 -- # read -r var val 
00:17:21.602 12:14:14 -- accel/accel.sh@20 -- # val= 00:17:21.602 12:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.602 12:14:14 -- accel/accel.sh@19 -- # IFS=: 00:17:21.602 12:14:14 -- accel/accel.sh@19 -- # read -r var val 00:17:21.602 12:14:14 -- accel/accel.sh@20 -- # val= 00:17:21.602 12:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.602 12:14:14 -- accel/accel.sh@19 -- # IFS=: 00:17:21.602 12:14:14 -- accel/accel.sh@19 -- # read -r var val 00:17:21.602 12:14:14 -- accel/accel.sh@20 -- # val= 00:17:21.602 12:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.602 12:14:14 -- accel/accel.sh@19 -- # IFS=: 00:17:21.602 12:14:14 -- accel/accel.sh@19 -- # read -r var val 00:17:21.602 12:14:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:21.602 12:14:14 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:17:21.602 12:14:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:21.602 00:17:21.602 real 0m1.594s 00:17:21.602 user 0m1.375s 00:17:21.602 sys 0m0.124s 00:17:21.602 12:14:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:21.602 12:14:14 -- common/autotest_common.sh@10 -- # set +x 00:17:21.602 ************************************ 00:17:21.602 END TEST accel_copy_crc32c_C2 00:17:21.602 ************************************ 00:17:21.602 12:14:14 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:17:21.602 12:14:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:17:21.602 12:14:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:21.602 12:14:14 -- common/autotest_common.sh@10 -- # set +x 00:17:21.602 ************************************ 00:17:21.602 START TEST accel_dualcast 00:17:21.602 ************************************ 00:17:21.602 12:14:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:17:21.602 12:14:14 -- accel/accel.sh@16 -- # local accel_opc 00:17:21.602 12:14:14 -- accel/accel.sh@17 -- # local accel_module 00:17:21.602 12:14:14 -- accel/accel.sh@19 -- # IFS=: 00:17:21.602 12:14:14 -- accel/accel.sh@19 -- # read -r var val 00:17:21.602 12:14:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:17:21.602 12:14:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:17:21.602 12:14:14 -- accel/accel.sh@12 -- # build_accel_config 00:17:21.602 12:14:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:21.602 12:14:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:21.602 12:14:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:21.602 12:14:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:21.602 12:14:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:21.602 12:14:14 -- accel/accel.sh@40 -- # local IFS=, 00:17:21.602 12:14:14 -- accel/accel.sh@41 -- # jq -r . 00:17:21.602 [2024-04-26 12:14:14.908721] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
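For dualcast the harness goes through the run_test/accel_test wrappers seen in the trace before reaching the binary; a hand-run equivalent of either layer is sketched below, assuming the accel.sh and autotest_common.sh scripts from the same checkout have been sourced so the wrapper functions exist.
# sketch: the wrapped form as the harness runs it, and the direct binary underneath
run_test accel_dualcast accel_test -t 1 -w dualcast -y   # wrapper form from the log (requires the sourced test scripts)
./build/examples/accel_perf -t 1 -w dualcast -y          # what accel_test ultimately runs, per the accel.sh@12 trace line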
00:17:21.602 [2024-04-26 12:14:14.908817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61376 ] 00:17:21.602 [2024-04-26 12:14:15.044792] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.861 [2024-04-26 12:14:15.182515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.861 12:14:15 -- accel/accel.sh@20 -- # val= 00:17:21.861 12:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # IFS=: 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # read -r var val 00:17:21.861 12:14:15 -- accel/accel.sh@20 -- # val= 00:17:21.861 12:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # IFS=: 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # read -r var val 00:17:21.861 12:14:15 -- accel/accel.sh@20 -- # val=0x1 00:17:21.861 12:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # IFS=: 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # read -r var val 00:17:21.861 12:14:15 -- accel/accel.sh@20 -- # val= 00:17:21.861 12:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # IFS=: 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # read -r var val 00:17:21.861 12:14:15 -- accel/accel.sh@20 -- # val= 00:17:21.861 12:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # IFS=: 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # read -r var val 00:17:21.861 12:14:15 -- accel/accel.sh@20 -- # val=dualcast 00:17:21.861 12:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.861 12:14:15 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # IFS=: 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # read -r var val 00:17:21.861 12:14:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:21.861 12:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # IFS=: 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # read -r var val 00:17:21.861 12:14:15 -- accel/accel.sh@20 -- # val= 00:17:21.861 12:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # IFS=: 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # read -r var val 00:17:21.861 12:14:15 -- accel/accel.sh@20 -- # val=software 00:17:21.861 12:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.861 12:14:15 -- accel/accel.sh@22 -- # accel_module=software 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # IFS=: 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # read -r var val 00:17:21.861 12:14:15 -- accel/accel.sh@20 -- # val=32 00:17:21.861 12:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # IFS=: 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # read -r var val 00:17:21.861 12:14:15 -- accel/accel.sh@20 -- # val=32 00:17:21.861 12:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # IFS=: 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # read -r var val 00:17:21.861 12:14:15 -- accel/accel.sh@20 -- # val=1 00:17:21.861 12:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # IFS=: 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # read -r var val 00:17:21.861 12:14:15 -- accel/accel.sh@20 -- # val='1 seconds' 
00:17:21.861 12:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # IFS=: 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # read -r var val 00:17:21.861 12:14:15 -- accel/accel.sh@20 -- # val=Yes 00:17:21.861 12:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # IFS=: 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # read -r var val 00:17:21.861 12:14:15 -- accel/accel.sh@20 -- # val= 00:17:21.861 12:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # IFS=: 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # read -r var val 00:17:21.861 12:14:15 -- accel/accel.sh@20 -- # val= 00:17:21.861 12:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # IFS=: 00:17:21.861 12:14:15 -- accel/accel.sh@19 -- # read -r var val 00:17:23.237 12:14:16 -- accel/accel.sh@20 -- # val= 00:17:23.237 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.237 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.237 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.237 12:14:16 -- accel/accel.sh@20 -- # val= 00:17:23.237 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.237 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.237 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.237 12:14:16 -- accel/accel.sh@20 -- # val= 00:17:23.237 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.237 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.237 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.237 12:14:16 -- accel/accel.sh@20 -- # val= 00:17:23.237 ************************************ 00:17:23.237 END TEST accel_dualcast 00:17:23.237 ************************************ 00:17:23.237 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.237 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.237 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.237 12:14:16 -- accel/accel.sh@20 -- # val= 00:17:23.237 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.237 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.237 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.237 12:14:16 -- accel/accel.sh@20 -- # val= 00:17:23.237 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.237 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.237 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.237 12:14:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:23.237 12:14:16 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:17:23.237 12:14:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:23.237 00:17:23.237 real 0m1.559s 00:17:23.237 user 0m1.331s 00:17:23.237 sys 0m0.129s 00:17:23.237 12:14:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:23.237 12:14:16 -- common/autotest_common.sh@10 -- # set +x 00:17:23.237 12:14:16 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:17:23.237 12:14:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:17:23.237 12:14:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:23.237 12:14:16 -- common/autotest_common.sh@10 -- # set +x 00:17:23.237 ************************************ 00:17:23.237 START TEST accel_compare 00:17:23.237 ************************************ 00:17:23.237 12:14:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:17:23.237 12:14:16 -- accel/accel.sh@16 -- # local accel_opc 00:17:23.237 12:14:16 -- accel/accel.sh@17 -- # local 
accel_module 00:17:23.237 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.237 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.237 12:14:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:17:23.237 12:14:16 -- accel/accel.sh@12 -- # build_accel_config 00:17:23.237 12:14:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:17:23.237 12:14:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:23.237 12:14:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:23.237 12:14:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:23.237 12:14:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:23.237 12:14:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:23.237 12:14:16 -- accel/accel.sh@40 -- # local IFS=, 00:17:23.237 12:14:16 -- accel/accel.sh@41 -- # jq -r . 00:17:23.237 [2024-04-26 12:14:16.598332] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:17:23.237 [2024-04-26 12:14:16.599118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61414 ] 00:17:23.495 [2024-04-26 12:14:16.747081] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.495 [2024-04-26 12:14:16.881344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.495 12:14:16 -- accel/accel.sh@20 -- # val= 00:17:23.495 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.495 12:14:16 -- accel/accel.sh@20 -- # val= 00:17:23.495 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.495 12:14:16 -- accel/accel.sh@20 -- # val=0x1 00:17:23.495 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.495 12:14:16 -- accel/accel.sh@20 -- # val= 00:17:23.495 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.495 12:14:16 -- accel/accel.sh@20 -- # val= 00:17:23.495 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.495 12:14:16 -- accel/accel.sh@20 -- # val=compare 00:17:23.495 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.495 12:14:16 -- accel/accel.sh@23 -- # accel_opc=compare 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.495 12:14:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:23.495 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.495 12:14:16 -- accel/accel.sh@20 -- # val= 00:17:23.495 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.495 12:14:16 -- accel/accel.sh@20 -- # val=software 00:17:23.495 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 
00:17:23.495 12:14:16 -- accel/accel.sh@22 -- # accel_module=software 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.495 12:14:16 -- accel/accel.sh@20 -- # val=32 00:17:23.495 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.495 12:14:16 -- accel/accel.sh@20 -- # val=32 00:17:23.495 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.495 12:14:16 -- accel/accel.sh@20 -- # val=1 00:17:23.495 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.495 12:14:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:23.495 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.495 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.496 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.496 12:14:16 -- accel/accel.sh@20 -- # val=Yes 00:17:23.496 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.496 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.496 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.496 12:14:16 -- accel/accel.sh@20 -- # val= 00:17:23.496 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.496 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.496 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:23.496 12:14:16 -- accel/accel.sh@20 -- # val= 00:17:23.496 12:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.496 12:14:16 -- accel/accel.sh@19 -- # IFS=: 00:17:23.496 12:14:16 -- accel/accel.sh@19 -- # read -r var val 00:17:24.871 12:14:18 -- accel/accel.sh@20 -- # val= 00:17:24.871 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.871 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:24.871 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:24.871 12:14:18 -- accel/accel.sh@20 -- # val= 00:17:24.871 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.871 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:24.871 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:24.871 12:14:18 -- accel/accel.sh@20 -- # val= 00:17:24.871 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.871 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:24.871 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:24.871 12:14:18 -- accel/accel.sh@20 -- # val= 00:17:24.871 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.871 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:24.871 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:24.871 12:14:18 -- accel/accel.sh@20 -- # val= 00:17:24.871 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.871 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:24.871 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:24.871 12:14:18 -- accel/accel.sh@20 -- # val= 00:17:24.871 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.871 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:24.871 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:24.871 12:14:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:24.871 12:14:18 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:17:24.871 12:14:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:24.871 00:17:24.871 real 0m1.584s 00:17:24.871 user 0m1.362s 00:17:24.871 sys 
0m0.121s 00:17:24.871 12:14:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:24.871 12:14:18 -- common/autotest_common.sh@10 -- # set +x 00:17:24.871 ************************************ 00:17:24.871 END TEST accel_compare 00:17:24.871 ************************************ 00:17:24.871 12:14:18 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:17:24.871 12:14:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:17:24.871 12:14:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:24.871 12:14:18 -- common/autotest_common.sh@10 -- # set +x 00:17:24.871 ************************************ 00:17:24.871 START TEST accel_xor 00:17:24.871 ************************************ 00:17:24.871 12:14:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:17:24.871 12:14:18 -- accel/accel.sh@16 -- # local accel_opc 00:17:24.871 12:14:18 -- accel/accel.sh@17 -- # local accel_module 00:17:24.871 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:24.871 12:14:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:17:24.871 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:24.871 12:14:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:17:24.871 12:14:18 -- accel/accel.sh@12 -- # build_accel_config 00:17:24.871 12:14:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:24.871 12:14:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:24.871 12:14:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:24.871 12:14:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:24.871 12:14:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:24.871 12:14:18 -- accel/accel.sh@40 -- # local IFS=, 00:17:24.871 12:14:18 -- accel/accel.sh@41 -- # jq -r . 00:17:24.871 [2024-04-26 12:14:18.296312] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
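The two xor passes that close out this stretch differ only in the -x option; the sketch below assumes -x sets the number of source buffers xored together (this TEST uses the default, the following TEST passes -x 3).
# sketch: xor workload, default source count and the three-source variant
./build/examples/accel_perf -t 1 -w xor -y        # default number of sources
./build/examples/accel_perf -t 1 -w xor -y -x 3   # assumed: three source buffers, as exercised by the next TEST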
00:17:24.871 [2024-04-26 12:14:18.296393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61459 ] 00:17:25.130 [2024-04-26 12:14:18.434081] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.130 [2024-04-26 12:14:18.565875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.389 12:14:18 -- accel/accel.sh@20 -- # val= 00:17:25.389 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.389 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:25.389 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:25.389 12:14:18 -- accel/accel.sh@20 -- # val= 00:17:25.389 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.389 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:25.389 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:25.389 12:14:18 -- accel/accel.sh@20 -- # val=0x1 00:17:25.390 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:25.390 12:14:18 -- accel/accel.sh@20 -- # val= 00:17:25.390 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:25.390 12:14:18 -- accel/accel.sh@20 -- # val= 00:17:25.390 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:25.390 12:14:18 -- accel/accel.sh@20 -- # val=xor 00:17:25.390 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.390 12:14:18 -- accel/accel.sh@23 -- # accel_opc=xor 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:25.390 12:14:18 -- accel/accel.sh@20 -- # val=2 00:17:25.390 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:25.390 12:14:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:25.390 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:25.390 12:14:18 -- accel/accel.sh@20 -- # val= 00:17:25.390 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:25.390 12:14:18 -- accel/accel.sh@20 -- # val=software 00:17:25.390 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.390 12:14:18 -- accel/accel.sh@22 -- # accel_module=software 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:25.390 12:14:18 -- accel/accel.sh@20 -- # val=32 00:17:25.390 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:25.390 12:14:18 -- accel/accel.sh@20 -- # val=32 00:17:25.390 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:25.390 12:14:18 -- accel/accel.sh@20 -- # val=1 00:17:25.390 12:14:18 -- 
accel/accel.sh@21 -- # case "$var" in 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:25.390 12:14:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:25.390 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:25.390 12:14:18 -- accel/accel.sh@20 -- # val=Yes 00:17:25.390 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:25.390 12:14:18 -- accel/accel.sh@20 -- # val= 00:17:25.390 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:25.390 12:14:18 -- accel/accel.sh@20 -- # val= 00:17:25.390 12:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # IFS=: 00:17:25.390 12:14:18 -- accel/accel.sh@19 -- # read -r var val 00:17:26.766 12:14:19 -- accel/accel.sh@20 -- # val= 00:17:26.766 12:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.766 12:14:19 -- accel/accel.sh@19 -- # IFS=: 00:17:26.766 12:14:19 -- accel/accel.sh@19 -- # read -r var val 00:17:26.766 12:14:19 -- accel/accel.sh@20 -- # val= 00:17:26.766 12:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.766 12:14:19 -- accel/accel.sh@19 -- # IFS=: 00:17:26.766 12:14:19 -- accel/accel.sh@19 -- # read -r var val 00:17:26.766 12:14:19 -- accel/accel.sh@20 -- # val= 00:17:26.766 12:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.766 12:14:19 -- accel/accel.sh@19 -- # IFS=: 00:17:26.766 12:14:19 -- accel/accel.sh@19 -- # read -r var val 00:17:26.766 12:14:19 -- accel/accel.sh@20 -- # val= 00:17:26.766 12:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.766 12:14:19 -- accel/accel.sh@19 -- # IFS=: 00:17:26.766 12:14:19 -- accel/accel.sh@19 -- # read -r var val 00:17:26.766 12:14:19 -- accel/accel.sh@20 -- # val= 00:17:26.766 12:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.766 12:14:19 -- accel/accel.sh@19 -- # IFS=: 00:17:26.766 12:14:19 -- accel/accel.sh@19 -- # read -r var val 00:17:26.766 12:14:19 -- accel/accel.sh@20 -- # val= 00:17:26.766 12:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.766 12:14:19 -- accel/accel.sh@19 -- # IFS=: 00:17:26.766 12:14:19 -- accel/accel.sh@19 -- # read -r var val 00:17:26.766 12:14:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:26.766 12:14:19 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:17:26.766 12:14:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:26.766 00:17:26.766 real 0m1.549s 00:17:26.766 user 0m1.337s 00:17:26.766 sys 0m0.117s 00:17:26.766 12:14:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:26.766 12:14:19 -- common/autotest_common.sh@10 -- # set +x 00:17:26.766 ************************************ 00:17:26.766 END TEST accel_xor 00:17:26.766 ************************************ 00:17:26.766 12:14:19 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:17:26.766 12:14:19 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:17:26.766 12:14:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:26.766 12:14:19 -- common/autotest_common.sh@10 -- # set +x 00:17:26.766 ************************************ 00:17:26.766 START TEST accel_xor 00:17:26.766 ************************************ 00:17:26.766 
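
A minimal standalone sketch of the two xor cases (the run that just finished and the three-source variant whose banner appears above), assuming the accel_perf example binary is built at the path the harness traces; the JSON module config that accel.sh feeds via -c /dev/fd/62 is omitted here, which should leave only the built-in software engine, matching the 'software' module reported in the results:

  # software xor with verification (-y); default two source buffers, then three via -x
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3
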
12:14:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:17:26.766 12:14:19 -- accel/accel.sh@16 -- # local accel_opc 00:17:26.766 12:14:19 -- accel/accel.sh@17 -- # local accel_module 00:17:26.766 12:14:19 -- accel/accel.sh@19 -- # IFS=: 00:17:26.766 12:14:19 -- accel/accel.sh@19 -- # read -r var val 00:17:26.766 12:14:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:17:26.766 12:14:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:17:26.766 12:14:19 -- accel/accel.sh@12 -- # build_accel_config 00:17:26.766 12:14:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:26.766 12:14:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:26.766 12:14:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:26.766 12:14:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:26.766 12:14:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:26.766 12:14:19 -- accel/accel.sh@40 -- # local IFS=, 00:17:26.766 12:14:19 -- accel/accel.sh@41 -- # jq -r . 00:17:26.766 [2024-04-26 12:14:19.958223] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:17:26.766 [2024-04-26 12:14:19.958326] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61497 ] 00:17:26.766 [2024-04-26 12:14:20.098795] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.766 [2024-04-26 12:14:20.225288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.025 12:14:20 -- accel/accel.sh@20 -- # val= 00:17:27.025 12:14:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # IFS=: 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # read -r var val 00:17:27.025 12:14:20 -- accel/accel.sh@20 -- # val= 00:17:27.025 12:14:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # IFS=: 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # read -r var val 00:17:27.025 12:14:20 -- accel/accel.sh@20 -- # val=0x1 00:17:27.025 12:14:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # IFS=: 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # read -r var val 00:17:27.025 12:14:20 -- accel/accel.sh@20 -- # val= 00:17:27.025 12:14:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # IFS=: 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # read -r var val 00:17:27.025 12:14:20 -- accel/accel.sh@20 -- # val= 00:17:27.025 12:14:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # IFS=: 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # read -r var val 00:17:27.025 12:14:20 -- accel/accel.sh@20 -- # val=xor 00:17:27.025 12:14:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.025 12:14:20 -- accel/accel.sh@23 -- # accel_opc=xor 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # IFS=: 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # read -r var val 00:17:27.025 12:14:20 -- accel/accel.sh@20 -- # val=3 00:17:27.025 12:14:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # IFS=: 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # read -r var val 00:17:27.025 12:14:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:27.025 12:14:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # IFS=: 
00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # read -r var val 00:17:27.025 12:14:20 -- accel/accel.sh@20 -- # val= 00:17:27.025 12:14:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # IFS=: 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # read -r var val 00:17:27.025 12:14:20 -- accel/accel.sh@20 -- # val=software 00:17:27.025 12:14:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.025 12:14:20 -- accel/accel.sh@22 -- # accel_module=software 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # IFS=: 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # read -r var val 00:17:27.025 12:14:20 -- accel/accel.sh@20 -- # val=32 00:17:27.025 12:14:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # IFS=: 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # read -r var val 00:17:27.025 12:14:20 -- accel/accel.sh@20 -- # val=32 00:17:27.025 12:14:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # IFS=: 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # read -r var val 00:17:27.025 12:14:20 -- accel/accel.sh@20 -- # val=1 00:17:27.025 12:14:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # IFS=: 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # read -r var val 00:17:27.025 12:14:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:27.025 12:14:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # IFS=: 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # read -r var val 00:17:27.025 12:14:20 -- accel/accel.sh@20 -- # val=Yes 00:17:27.025 12:14:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # IFS=: 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # read -r var val 00:17:27.025 12:14:20 -- accel/accel.sh@20 -- # val= 00:17:27.025 12:14:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # IFS=: 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # read -r var val 00:17:27.025 12:14:20 -- accel/accel.sh@20 -- # val= 00:17:27.025 12:14:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # IFS=: 00:17:27.025 12:14:20 -- accel/accel.sh@19 -- # read -r var val 00:17:28.413 12:14:21 -- accel/accel.sh@20 -- # val= 00:17:28.413 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.413 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.413 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.413 12:14:21 -- accel/accel.sh@20 -- # val= 00:17:28.413 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.413 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.413 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.413 12:14:21 -- accel/accel.sh@20 -- # val= 00:17:28.413 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.413 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.413 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.413 12:14:21 -- accel/accel.sh@20 -- # val= 00:17:28.413 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.413 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.413 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.413 12:14:21 -- accel/accel.sh@20 -- # val= 00:17:28.413 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.413 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.413 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.413 12:14:21 -- accel/accel.sh@20 -- # val= 00:17:28.413 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 
00:17:28.413 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.413 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.413 12:14:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:28.413 12:14:21 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:17:28.413 12:14:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:28.413 00:17:28.413 real 0m1.543s 00:17:28.413 user 0m1.329s 00:17:28.413 sys 0m0.115s 00:17:28.413 12:14:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:28.413 12:14:21 -- common/autotest_common.sh@10 -- # set +x 00:17:28.413 ************************************ 00:17:28.413 END TEST accel_xor 00:17:28.413 ************************************ 00:17:28.413 12:14:21 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:17:28.413 12:14:21 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:17:28.413 12:14:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:28.413 12:14:21 -- common/autotest_common.sh@10 -- # set +x 00:17:28.413 ************************************ 00:17:28.413 START TEST accel_dif_verify 00:17:28.413 ************************************ 00:17:28.413 12:14:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:17:28.413 12:14:21 -- accel/accel.sh@16 -- # local accel_opc 00:17:28.413 12:14:21 -- accel/accel.sh@17 -- # local accel_module 00:17:28.413 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.413 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.413 12:14:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:17:28.413 12:14:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:17:28.413 12:14:21 -- accel/accel.sh@12 -- # build_accel_config 00:17:28.413 12:14:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:28.413 12:14:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:28.413 12:14:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:28.413 12:14:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:28.413 12:14:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:28.413 12:14:21 -- accel/accel.sh@40 -- # local IFS=, 00:17:28.413 12:14:21 -- accel/accel.sh@41 -- # jq -r . 00:17:28.413 [2024-04-26 12:14:21.623533] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:17:28.413 [2024-04-26 12:14:21.623615] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61538 ] 00:17:28.413 [2024-04-26 12:14:21.764260] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.671 [2024-04-26 12:14:21.893885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.671 12:14:21 -- accel/accel.sh@20 -- # val= 00:17:28.671 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.671 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.671 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.671 12:14:21 -- accel/accel.sh@20 -- # val= 00:17:28.671 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.671 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.671 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.671 12:14:21 -- accel/accel.sh@20 -- # val=0x1 00:17:28.671 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.671 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.671 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.671 12:14:21 -- accel/accel.sh@20 -- # val= 00:17:28.671 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.671 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.671 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.671 12:14:21 -- accel/accel.sh@20 -- # val= 00:17:28.671 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.672 12:14:21 -- accel/accel.sh@20 -- # val=dif_verify 00:17:28.672 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.672 12:14:21 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.672 12:14:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:28.672 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.672 12:14:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:28.672 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.672 12:14:21 -- accel/accel.sh@20 -- # val='512 bytes' 00:17:28.672 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.672 12:14:21 -- accel/accel.sh@20 -- # val='8 bytes' 00:17:28.672 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.672 12:14:21 -- accel/accel.sh@20 -- # val= 00:17:28.672 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.672 12:14:21 -- accel/accel.sh@20 -- # val=software 00:17:28.672 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.672 12:14:21 -- accel/accel.sh@22 -- # accel_module=software 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.672 12:14:21 -- accel/accel.sh@20 
-- # val=32 00:17:28.672 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.672 12:14:21 -- accel/accel.sh@20 -- # val=32 00:17:28.672 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.672 12:14:21 -- accel/accel.sh@20 -- # val=1 00:17:28.672 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.672 12:14:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:28.672 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.672 12:14:21 -- accel/accel.sh@20 -- # val=No 00:17:28.672 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.672 12:14:21 -- accel/accel.sh@20 -- # val= 00:17:28.672 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:28.672 12:14:21 -- accel/accel.sh@20 -- # val= 00:17:28.672 12:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # IFS=: 00:17:28.672 12:14:21 -- accel/accel.sh@19 -- # read -r var val 00:17:30.044 12:14:23 -- accel/accel.sh@20 -- # val= 00:17:30.044 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.044 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.044 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.044 12:14:23 -- accel/accel.sh@20 -- # val= 00:17:30.044 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.044 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.044 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.044 12:14:23 -- accel/accel.sh@20 -- # val= 00:17:30.044 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.044 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.044 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.044 12:14:23 -- accel/accel.sh@20 -- # val= 00:17:30.044 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.044 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.044 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.044 12:14:23 -- accel/accel.sh@20 -- # val= 00:17:30.044 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.044 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.044 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.044 12:14:23 -- accel/accel.sh@20 -- # val= 00:17:30.044 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.044 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.044 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.044 12:14:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:30.044 12:14:23 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:17:30.044 12:14:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:30.044 00:17:30.044 real 0m1.549s 00:17:30.044 user 0m1.338s 00:17:30.044 sys 0m0.120s 00:17:30.044 ************************************ 00:17:30.044 END TEST accel_dif_verify 00:17:30.044 ************************************ 00:17:30.044 12:14:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:30.044 
12:14:23 -- common/autotest_common.sh@10 -- # set +x 00:17:30.044 12:14:23 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:17:30.044 12:14:23 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:17:30.044 12:14:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:30.044 12:14:23 -- common/autotest_common.sh@10 -- # set +x 00:17:30.044 ************************************ 00:17:30.044 START TEST accel_dif_generate 00:17:30.044 ************************************ 00:17:30.044 12:14:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:17:30.044 12:14:23 -- accel/accel.sh@16 -- # local accel_opc 00:17:30.044 12:14:23 -- accel/accel.sh@17 -- # local accel_module 00:17:30.044 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.044 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.044 12:14:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:17:30.044 12:14:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:17:30.044 12:14:23 -- accel/accel.sh@12 -- # build_accel_config 00:17:30.044 12:14:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:30.044 12:14:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:30.044 12:14:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:30.044 12:14:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:30.044 12:14:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:30.044 12:14:23 -- accel/accel.sh@40 -- # local IFS=, 00:17:30.044 12:14:23 -- accel/accel.sh@41 -- # jq -r . 00:17:30.044 [2024-04-26 12:14:23.283999] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:17:30.044 [2024-04-26 12:14:23.284093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61578 ] 00:17:30.044 [2024-04-26 12:14:23.414600] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.303 [2024-04-26 12:14:23.552388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val= 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val= 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val=0x1 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val= 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val= 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val=dif_generate 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@23 -- # 
accel_opc=dif_generate 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val='512 bytes' 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val='8 bytes' 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val= 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val=software 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@22 -- # accel_module=software 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val=32 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val=32 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val=1 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val=No 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val= 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:30.303 12:14:23 -- accel/accel.sh@20 -- # val= 00:17:30.303 12:14:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # IFS=: 00:17:30.303 12:14:23 -- accel/accel.sh@19 -- # read -r var val 00:17:31.681 12:14:24 -- accel/accel.sh@20 -- # val= 00:17:31.681 12:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.681 12:14:24 -- accel/accel.sh@19 -- # IFS=: 00:17:31.681 12:14:24 -- accel/accel.sh@19 -- # read -r var 
val 00:17:31.681 12:14:24 -- accel/accel.sh@20 -- # val= 00:17:31.681 12:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.681 12:14:24 -- accel/accel.sh@19 -- # IFS=: 00:17:31.681 12:14:24 -- accel/accel.sh@19 -- # read -r var val 00:17:31.681 12:14:24 -- accel/accel.sh@20 -- # val= 00:17:31.681 12:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.681 12:14:24 -- accel/accel.sh@19 -- # IFS=: 00:17:31.681 12:14:24 -- accel/accel.sh@19 -- # read -r var val 00:17:31.681 12:14:24 -- accel/accel.sh@20 -- # val= 00:17:31.681 12:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.681 12:14:24 -- accel/accel.sh@19 -- # IFS=: 00:17:31.681 12:14:24 -- accel/accel.sh@19 -- # read -r var val 00:17:31.681 12:14:24 -- accel/accel.sh@20 -- # val= 00:17:31.681 12:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.681 12:14:24 -- accel/accel.sh@19 -- # IFS=: 00:17:31.681 12:14:24 -- accel/accel.sh@19 -- # read -r var val 00:17:31.681 12:14:24 -- accel/accel.sh@20 -- # val= 00:17:31.681 12:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.681 12:14:24 -- accel/accel.sh@19 -- # IFS=: 00:17:31.681 12:14:24 -- accel/accel.sh@19 -- # read -r var val 00:17:31.681 12:14:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:31.681 12:14:24 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:17:31.681 12:14:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:31.681 00:17:31.681 real 0m1.550s 00:17:31.681 user 0m1.336s 00:17:31.681 sys 0m0.116s 00:17:31.681 12:14:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:31.681 ************************************ 00:17:31.681 END TEST accel_dif_generate 00:17:31.681 ************************************ 00:17:31.681 12:14:24 -- common/autotest_common.sh@10 -- # set +x 00:17:31.681 12:14:24 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:17:31.681 12:14:24 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:17:31.681 12:14:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:31.681 12:14:24 -- common/autotest_common.sh@10 -- # set +x 00:17:31.681 ************************************ 00:17:31.681 START TEST accel_dif_generate_copy 00:17:31.681 ************************************ 00:17:31.681 12:14:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:17:31.681 12:14:24 -- accel/accel.sh@16 -- # local accel_opc 00:17:31.681 12:14:24 -- accel/accel.sh@17 -- # local accel_module 00:17:31.681 12:14:24 -- accel/accel.sh@19 -- # IFS=: 00:17:31.681 12:14:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:17:31.681 12:14:24 -- accel/accel.sh@19 -- # read -r var val 00:17:31.681 12:14:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:17:31.681 12:14:24 -- accel/accel.sh@12 -- # build_accel_config 00:17:31.681 12:14:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:31.681 12:14:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:31.682 12:14:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:31.682 12:14:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:31.682 12:14:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:31.682 12:14:24 -- accel/accel.sh@40 -- # local IFS=, 00:17:31.682 12:14:24 -- accel/accel.sh@41 -- # jq -r . 00:17:31.682 [2024-04-26 12:14:24.949446] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
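
The dif_verify and dif_generate runs above, and the dif_generate_copy run now in progress, differ only in the -w workload name; a comparable standalone sketch, again dropping the harness-supplied -c /dev/fd/62 config so the transfer, block, and metadata sizes fall back to the defaults shown in the traces:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy
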
00:17:31.682 [2024-04-26 12:14:24.949552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61616 ] 00:17:31.682 [2024-04-26 12:14:25.088932] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.940 [2024-04-26 12:14:25.207951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.940 12:14:25 -- accel/accel.sh@20 -- # val= 00:17:31.940 12:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # IFS=: 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # read -r var val 00:17:31.940 12:14:25 -- accel/accel.sh@20 -- # val= 00:17:31.940 12:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # IFS=: 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # read -r var val 00:17:31.940 12:14:25 -- accel/accel.sh@20 -- # val=0x1 00:17:31.940 12:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # IFS=: 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # read -r var val 00:17:31.940 12:14:25 -- accel/accel.sh@20 -- # val= 00:17:31.940 12:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # IFS=: 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # read -r var val 00:17:31.940 12:14:25 -- accel/accel.sh@20 -- # val= 00:17:31.940 12:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # IFS=: 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # read -r var val 00:17:31.940 12:14:25 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:17:31.940 12:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.940 12:14:25 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # IFS=: 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # read -r var val 00:17:31.940 12:14:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:31.940 12:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # IFS=: 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # read -r var val 00:17:31.940 12:14:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:31.940 12:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # IFS=: 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # read -r var val 00:17:31.940 12:14:25 -- accel/accel.sh@20 -- # val= 00:17:31.940 12:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # IFS=: 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # read -r var val 00:17:31.940 12:14:25 -- accel/accel.sh@20 -- # val=software 00:17:31.940 12:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.940 12:14:25 -- accel/accel.sh@22 -- # accel_module=software 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # IFS=: 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # read -r var val 00:17:31.940 12:14:25 -- accel/accel.sh@20 -- # val=32 00:17:31.940 12:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # IFS=: 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # read -r var val 00:17:31.940 12:14:25 -- accel/accel.sh@20 -- # val=32 00:17:31.940 12:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # IFS=: 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # read -r var val 00:17:31.940 12:14:25 -- accel/accel.sh@20 
-- # val=1 00:17:31.940 12:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # IFS=: 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # read -r var val 00:17:31.940 12:14:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:31.940 12:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # IFS=: 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # read -r var val 00:17:31.940 12:14:25 -- accel/accel.sh@20 -- # val=No 00:17:31.940 12:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # IFS=: 00:17:31.940 12:14:25 -- accel/accel.sh@19 -- # read -r var val 00:17:31.940 12:14:25 -- accel/accel.sh@20 -- # val= 00:17:31.940 12:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.941 12:14:25 -- accel/accel.sh@19 -- # IFS=: 00:17:31.941 12:14:25 -- accel/accel.sh@19 -- # read -r var val 00:17:31.941 12:14:25 -- accel/accel.sh@20 -- # val= 00:17:31.941 12:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:31.941 12:14:25 -- accel/accel.sh@19 -- # IFS=: 00:17:31.941 12:14:25 -- accel/accel.sh@19 -- # read -r var val 00:17:33.313 12:14:26 -- accel/accel.sh@20 -- # val= 00:17:33.313 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.313 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.313 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.313 12:14:26 -- accel/accel.sh@20 -- # val= 00:17:33.313 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.313 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.313 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.313 12:14:26 -- accel/accel.sh@20 -- # val= 00:17:33.313 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.313 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.313 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.313 12:14:26 -- accel/accel.sh@20 -- # val= 00:17:33.313 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.313 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.313 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.313 12:14:26 -- accel/accel.sh@20 -- # val= 00:17:33.313 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.313 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.313 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.313 12:14:26 -- accel/accel.sh@20 -- # val= 00:17:33.313 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.313 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.313 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.313 12:14:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:33.313 12:14:26 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:17:33.313 12:14:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:33.313 00:17:33.313 real 0m1.527s 00:17:33.313 user 0m1.325s 00:17:33.313 sys 0m0.109s 00:17:33.313 12:14:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:33.313 ************************************ 00:17:33.313 END TEST accel_dif_generate_copy 00:17:33.313 ************************************ 00:17:33.313 12:14:26 -- common/autotest_common.sh@10 -- # set +x 00:17:33.313 12:14:26 -- accel/accel.sh@115 -- # [[ y == y ]] 00:17:33.313 12:14:26 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:33.313 12:14:26 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:17:33.313 12:14:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:33.313 12:14:26 -- 
common/autotest_common.sh@10 -- # set +x 00:17:33.313 ************************************ 00:17:33.313 START TEST accel_comp 00:17:33.313 ************************************ 00:17:33.313 12:14:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:33.313 12:14:26 -- accel/accel.sh@16 -- # local accel_opc 00:17:33.313 12:14:26 -- accel/accel.sh@17 -- # local accel_module 00:17:33.313 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.313 12:14:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:33.313 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.313 12:14:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:33.313 12:14:26 -- accel/accel.sh@12 -- # build_accel_config 00:17:33.313 12:14:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:33.313 12:14:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:33.313 12:14:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:33.313 12:14:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:33.313 12:14:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:33.313 12:14:26 -- accel/accel.sh@40 -- # local IFS=, 00:17:33.313 12:14:26 -- accel/accel.sh@41 -- # jq -r . 00:17:33.313 [2024-04-26 12:14:26.595815] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:17:33.313 [2024-04-26 12:14:26.595896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61656 ] 00:17:33.313 [2024-04-26 12:14:26.728232] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.571 [2024-04-26 12:14:26.843796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.571 12:14:26 -- accel/accel.sh@20 -- # val= 00:17:33.571 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.571 12:14:26 -- accel/accel.sh@20 -- # val= 00:17:33.571 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.571 12:14:26 -- accel/accel.sh@20 -- # val= 00:17:33.571 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.571 12:14:26 -- accel/accel.sh@20 -- # val=0x1 00:17:33.571 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.571 12:14:26 -- accel/accel.sh@20 -- # val= 00:17:33.571 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.571 12:14:26 -- accel/accel.sh@20 -- # val= 00:17:33.571 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.571 12:14:26 -- accel/accel.sh@20 -- # val=compress 00:17:33.571 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.571 12:14:26 -- accel/accel.sh@23 
-- # accel_opc=compress 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.571 12:14:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:33.571 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.571 12:14:26 -- accel/accel.sh@20 -- # val= 00:17:33.571 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.571 12:14:26 -- accel/accel.sh@20 -- # val=software 00:17:33.571 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.571 12:14:26 -- accel/accel.sh@22 -- # accel_module=software 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.571 12:14:26 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:33.571 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.571 12:14:26 -- accel/accel.sh@20 -- # val=32 00:17:33.571 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.571 12:14:26 -- accel/accel.sh@20 -- # val=32 00:17:33.571 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.571 12:14:26 -- accel/accel.sh@20 -- # val=1 00:17:33.571 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.571 12:14:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:33.571 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.571 12:14:26 -- accel/accel.sh@20 -- # val=No 00:17:33.571 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.571 12:14:26 -- accel/accel.sh@20 -- # val= 00:17:33.571 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:33.571 12:14:26 -- accel/accel.sh@20 -- # val= 00:17:33.571 12:14:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # IFS=: 00:17:33.571 12:14:26 -- accel/accel.sh@19 -- # read -r var val 00:17:34.950 12:14:28 -- accel/accel.sh@20 -- # val= 00:17:34.950 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:34.950 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:34.950 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:34.950 12:14:28 -- accel/accel.sh@20 -- # val= 00:17:34.950 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:34.950 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:34.950 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:34.950 12:14:28 -- accel/accel.sh@20 -- # val= 00:17:34.950 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:34.950 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:34.950 12:14:28 -- accel/accel.sh@19 -- # 
read -r var val 00:17:34.950 12:14:28 -- accel/accel.sh@20 -- # val= 00:17:34.950 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:34.950 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:34.950 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:34.950 12:14:28 -- accel/accel.sh@20 -- # val= 00:17:34.950 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:34.950 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:34.950 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:34.950 12:14:28 -- accel/accel.sh@20 -- # val= 00:17:34.950 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:34.950 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:34.950 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:34.950 12:14:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:34.950 12:14:28 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:17:34.950 12:14:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:34.950 00:17:34.950 real 0m1.521s 00:17:34.950 user 0m1.322s 00:17:34.950 sys 0m0.108s 00:17:34.950 12:14:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:34.950 ************************************ 00:17:34.950 END TEST accel_comp 00:17:34.950 ************************************ 00:17:34.950 12:14:28 -- common/autotest_common.sh@10 -- # set +x 00:17:34.950 12:14:28 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:34.950 12:14:28 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:17:34.950 12:14:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:34.950 12:14:28 -- common/autotest_common.sh@10 -- # set +x 00:17:34.950 ************************************ 00:17:34.950 START TEST accel_decomp 00:17:34.950 ************************************ 00:17:34.950 12:14:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:34.950 12:14:28 -- accel/accel.sh@16 -- # local accel_opc 00:17:34.950 12:14:28 -- accel/accel.sh@17 -- # local accel_module 00:17:34.950 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:34.950 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:34.950 12:14:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:34.950 12:14:28 -- accel/accel.sh@12 -- # build_accel_config 00:17:34.950 12:14:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:34.950 12:14:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:34.950 12:14:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:34.950 12:14:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:34.950 12:14:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:34.950 12:14:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:34.950 12:14:28 -- accel/accel.sh@40 -- # local IFS=, 00:17:34.950 12:14:28 -- accel/accel.sh@41 -- # jq -r . 00:17:34.950 [2024-04-26 12:14:28.237609] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
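
The compress run that just completed and the decompress run starting here both point -l at the same input file under test/accel; the same kind of sketch, with the -c /dev/fd/62 config once more omitted:

  # compress the sample file, then decompress it with verification (-y)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
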
00:17:34.950 [2024-04-26 12:14:28.237767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61700 ] 00:17:34.950 [2024-04-26 12:14:28.386241] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.209 [2024-04-26 12:14:28.500816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.209 12:14:28 -- accel/accel.sh@20 -- # val= 00:17:35.209 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:35.209 12:14:28 -- accel/accel.sh@20 -- # val= 00:17:35.209 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:35.209 12:14:28 -- accel/accel.sh@20 -- # val= 00:17:35.209 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:35.209 12:14:28 -- accel/accel.sh@20 -- # val=0x1 00:17:35.209 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:35.209 12:14:28 -- accel/accel.sh@20 -- # val= 00:17:35.209 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:35.209 12:14:28 -- accel/accel.sh@20 -- # val= 00:17:35.209 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:35.209 12:14:28 -- accel/accel.sh@20 -- # val=decompress 00:17:35.209 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:35.209 12:14:28 -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:35.209 12:14:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:35.209 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:35.209 12:14:28 -- accel/accel.sh@20 -- # val= 00:17:35.209 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:35.209 12:14:28 -- accel/accel.sh@20 -- # val=software 00:17:35.209 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:35.209 12:14:28 -- accel/accel.sh@22 -- # accel_module=software 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:35.209 12:14:28 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:35.209 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:35.209 12:14:28 -- accel/accel.sh@20 -- # val=32 00:17:35.209 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:35.209 12:14:28 -- 
accel/accel.sh@20 -- # val=32 00:17:35.209 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:35.209 12:14:28 -- accel/accel.sh@20 -- # val=1 00:17:35.209 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:35.209 12:14:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:35.209 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:35.209 12:14:28 -- accel/accel.sh@20 -- # val=Yes 00:17:35.209 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:35.209 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:35.209 12:14:28 -- accel/accel.sh@20 -- # val= 00:17:35.210 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:35.210 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:35.210 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:35.210 12:14:28 -- accel/accel.sh@20 -- # val= 00:17:35.210 12:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:35.210 12:14:28 -- accel/accel.sh@19 -- # IFS=: 00:17:35.210 12:14:28 -- accel/accel.sh@19 -- # read -r var val 00:17:36.586 12:14:29 -- accel/accel.sh@20 -- # val= 00:17:36.586 12:14:29 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.586 12:14:29 -- accel/accel.sh@19 -- # IFS=: 00:17:36.586 12:14:29 -- accel/accel.sh@19 -- # read -r var val 00:17:36.586 12:14:29 -- accel/accel.sh@20 -- # val= 00:17:36.586 12:14:29 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.586 12:14:29 -- accel/accel.sh@19 -- # IFS=: 00:17:36.586 12:14:29 -- accel/accel.sh@19 -- # read -r var val 00:17:36.586 12:14:29 -- accel/accel.sh@20 -- # val= 00:17:36.586 12:14:29 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.586 12:14:29 -- accel/accel.sh@19 -- # IFS=: 00:17:36.586 12:14:29 -- accel/accel.sh@19 -- # read -r var val 00:17:36.586 12:14:29 -- accel/accel.sh@20 -- # val= 00:17:36.586 12:14:29 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.586 12:14:29 -- accel/accel.sh@19 -- # IFS=: 00:17:36.586 12:14:29 -- accel/accel.sh@19 -- # read -r var val 00:17:36.586 12:14:29 -- accel/accel.sh@20 -- # val= 00:17:36.586 12:14:29 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.586 12:14:29 -- accel/accel.sh@19 -- # IFS=: 00:17:36.586 12:14:29 -- accel/accel.sh@19 -- # read -r var val 00:17:36.586 12:14:29 -- accel/accel.sh@20 -- # val= 00:17:36.586 12:14:29 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.586 12:14:29 -- accel/accel.sh@19 -- # IFS=: 00:17:36.586 12:14:29 -- accel/accel.sh@19 -- # read -r var val 00:17:36.586 12:14:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:36.586 12:14:29 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:36.586 12:14:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:36.586 00:17:36.586 real 0m1.555s 00:17:36.586 user 0m1.339s 00:17:36.586 sys 0m0.120s 00:17:36.586 12:14:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:36.586 12:14:29 -- common/autotest_common.sh@10 -- # set +x 00:17:36.586 ************************************ 00:17:36.586 END TEST accel_decomp 00:17:36.586 ************************************ 00:17:36.586 12:14:29 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
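
The accel_decmop_full case queued here (the 'decmop' spelling is accel.sh's own name for the test) reruns the decompress workload with -o 0 added; judging by the '111250 bytes' value traced further down, that appears to let the operation size follow the input data rather than the 4096-byte default used elsewhere. A hedged standalone equivalent:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
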
00:17:36.586 12:14:29 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:17:36.586 12:14:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:36.586 12:14:29 -- common/autotest_common.sh@10 -- # set +x 00:17:36.586 ************************************ 00:17:36.586 START TEST accel_decmop_full 00:17:36.586 ************************************ 00:17:36.586 12:14:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:17:36.586 12:14:29 -- accel/accel.sh@16 -- # local accel_opc 00:17:36.586 12:14:29 -- accel/accel.sh@17 -- # local accel_module 00:17:36.586 12:14:29 -- accel/accel.sh@19 -- # IFS=: 00:17:36.586 12:14:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:17:36.586 12:14:29 -- accel/accel.sh@19 -- # read -r var val 00:17:36.586 12:14:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:17:36.586 12:14:29 -- accel/accel.sh@12 -- # build_accel_config 00:17:36.586 12:14:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:36.586 12:14:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:36.586 12:14:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:36.586 12:14:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:36.587 12:14:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:36.587 12:14:29 -- accel/accel.sh@40 -- # local IFS=, 00:17:36.587 12:14:29 -- accel/accel.sh@41 -- # jq -r . 00:17:36.587 [2024-04-26 12:14:29.892673] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:17:36.587 [2024-04-26 12:14:29.892806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61738 ] 00:17:36.587 [2024-04-26 12:14:30.031238] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.845 [2024-04-26 12:14:30.176424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.845 12:14:30 -- accel/accel.sh@20 -- # val= 00:17:36.845 12:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.845 12:14:30 -- accel/accel.sh@19 -- # IFS=: 00:17:36.845 12:14:30 -- accel/accel.sh@19 -- # read -r var val 00:17:36.845 12:14:30 -- accel/accel.sh@20 -- # val= 00:17:36.845 12:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.845 12:14:30 -- accel/accel.sh@19 -- # IFS=: 00:17:36.845 12:14:30 -- accel/accel.sh@19 -- # read -r var val 00:17:36.845 12:14:30 -- accel/accel.sh@20 -- # val= 00:17:36.845 12:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.845 12:14:30 -- accel/accel.sh@19 -- # IFS=: 00:17:36.845 12:14:30 -- accel/accel.sh@19 -- # read -r var val 00:17:36.845 12:14:30 -- accel/accel.sh@20 -- # val=0x1 00:17:36.845 12:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.845 12:14:30 -- accel/accel.sh@19 -- # IFS=: 00:17:36.845 12:14:30 -- accel/accel.sh@19 -- # read -r var val 00:17:36.845 12:14:30 -- accel/accel.sh@20 -- # val= 00:17:36.845 12:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.845 12:14:30 -- accel/accel.sh@19 -- # IFS=: 00:17:36.845 12:14:30 -- accel/accel.sh@19 -- # read -r var val 00:17:36.845 12:14:30 -- accel/accel.sh@20 -- # val= 00:17:36.845 12:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.845 12:14:30 -- accel/accel.sh@19 -- # IFS=: 00:17:36.845 
12:14:30 -- accel/accel.sh@19 -- # read -r var val 00:17:36.846 12:14:30 -- accel/accel.sh@20 -- # val=decompress 00:17:36.846 12:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.846 12:14:30 -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # IFS=: 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # read -r var val 00:17:36.846 12:14:30 -- accel/accel.sh@20 -- # val='111250 bytes' 00:17:36.846 12:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # IFS=: 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # read -r var val 00:17:36.846 12:14:30 -- accel/accel.sh@20 -- # val= 00:17:36.846 12:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # IFS=: 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # read -r var val 00:17:36.846 12:14:30 -- accel/accel.sh@20 -- # val=software 00:17:36.846 12:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.846 12:14:30 -- accel/accel.sh@22 -- # accel_module=software 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # IFS=: 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # read -r var val 00:17:36.846 12:14:30 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:36.846 12:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # IFS=: 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # read -r var val 00:17:36.846 12:14:30 -- accel/accel.sh@20 -- # val=32 00:17:36.846 12:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # IFS=: 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # read -r var val 00:17:36.846 12:14:30 -- accel/accel.sh@20 -- # val=32 00:17:36.846 12:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # IFS=: 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # read -r var val 00:17:36.846 12:14:30 -- accel/accel.sh@20 -- # val=1 00:17:36.846 12:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # IFS=: 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # read -r var val 00:17:36.846 12:14:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:36.846 12:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # IFS=: 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # read -r var val 00:17:36.846 12:14:30 -- accel/accel.sh@20 -- # val=Yes 00:17:36.846 12:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # IFS=: 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # read -r var val 00:17:36.846 12:14:30 -- accel/accel.sh@20 -- # val= 00:17:36.846 12:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # IFS=: 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # read -r var val 00:17:36.846 12:14:30 -- accel/accel.sh@20 -- # val= 00:17:36.846 12:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # IFS=: 00:17:36.846 12:14:30 -- accel/accel.sh@19 -- # read -r var val 00:17:38.222 12:14:31 -- accel/accel.sh@20 -- # val= 00:17:38.222 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.222 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.222 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.222 12:14:31 -- accel/accel.sh@20 -- # val= 00:17:38.222 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.222 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.222 12:14:31 -- accel/accel.sh@19 -- # read -r 
var val 00:17:38.222 12:14:31 -- accel/accel.sh@20 -- # val= 00:17:38.222 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.222 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.222 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.222 12:14:31 -- accel/accel.sh@20 -- # val= 00:17:38.222 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.222 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.222 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.222 12:14:31 -- accel/accel.sh@20 -- # val= 00:17:38.222 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.222 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.222 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.222 12:14:31 -- accel/accel.sh@20 -- # val= 00:17:38.222 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.222 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.222 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.222 12:14:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:38.222 12:14:31 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:38.222 12:14:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:38.222 00:17:38.222 real 0m1.574s 00:17:38.222 user 0m0.016s 00:17:38.222 sys 0m0.001s 00:17:38.222 12:14:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:38.222 12:14:31 -- common/autotest_common.sh@10 -- # set +x 00:17:38.222 ************************************ 00:17:38.222 END TEST accel_decmop_full 00:17:38.222 ************************************ 00:17:38.222 12:14:31 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:38.222 12:14:31 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:17:38.222 12:14:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:38.222 12:14:31 -- common/autotest_common.sh@10 -- # set +x 00:17:38.222 ************************************ 00:17:38.222 START TEST accel_decomp_mcore 00:17:38.222 ************************************ 00:17:38.222 12:14:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:38.222 12:14:31 -- accel/accel.sh@16 -- # local accel_opc 00:17:38.222 12:14:31 -- accel/accel.sh@17 -- # local accel_module 00:17:38.222 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.222 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.222 12:14:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:38.222 12:14:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:38.222 12:14:31 -- accel/accel.sh@12 -- # build_accel_config 00:17:38.222 12:14:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:38.222 12:14:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:38.222 12:14:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:38.222 12:14:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:38.222 12:14:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:38.222 12:14:31 -- accel/accel.sh@40 -- # local IFS=, 00:17:38.222 12:14:31 -- accel/accel.sh@41 -- # jq -r . 00:17:38.222 [2024-04-26 12:14:31.574393] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:17:38.222 [2024-04-26 12:14:31.574522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61778 ] 00:17:38.481 [2024-04-26 12:14:31.718132] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:38.481 [2024-04-26 12:14:31.844220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.481 [2024-04-26 12:14:31.844309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.481 [2024-04-26 12:14:31.844378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:38.481 [2024-04-26 12:14:31.844386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.481 12:14:31 -- accel/accel.sh@20 -- # val= 00:17:38.481 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.481 12:14:31 -- accel/accel.sh@20 -- # val= 00:17:38.481 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.481 12:14:31 -- accel/accel.sh@20 -- # val= 00:17:38.481 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.481 12:14:31 -- accel/accel.sh@20 -- # val=0xf 00:17:38.481 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.481 12:14:31 -- accel/accel.sh@20 -- # val= 00:17:38.481 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.481 12:14:31 -- accel/accel.sh@20 -- # val= 00:17:38.481 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.481 12:14:31 -- accel/accel.sh@20 -- # val=decompress 00:17:38.481 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.481 12:14:31 -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.481 12:14:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:38.481 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.481 12:14:31 -- accel/accel.sh@20 -- # val= 00:17:38.481 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.481 12:14:31 -- accel/accel.sh@20 -- # val=software 00:17:38.481 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.481 12:14:31 -- accel/accel.sh@22 -- # accel_module=software 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.481 12:14:31 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:38.481 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # IFS=: 
00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.481 12:14:31 -- accel/accel.sh@20 -- # val=32 00:17:38.481 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.481 12:14:31 -- accel/accel.sh@20 -- # val=32 00:17:38.481 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.481 12:14:31 -- accel/accel.sh@20 -- # val=1 00:17:38.481 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.481 12:14:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:38.481 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.481 12:14:31 -- accel/accel.sh@20 -- # val=Yes 00:17:38.481 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.481 12:14:31 -- accel/accel.sh@20 -- # val= 00:17:38.481 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:38.481 12:14:31 -- accel/accel.sh@20 -- # val= 00:17:38.481 12:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # IFS=: 00:17:38.481 12:14:31 -- accel/accel.sh@19 -- # read -r var val 00:17:39.858 12:14:33 -- accel/accel.sh@20 -- # val= 00:17:39.858 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:39.858 12:14:33 -- accel/accel.sh@20 -- # val= 00:17:39.858 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:39.858 12:14:33 -- accel/accel.sh@20 -- # val= 00:17:39.858 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:39.858 12:14:33 -- accel/accel.sh@20 -- # val= 00:17:39.858 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:39.858 12:14:33 -- accel/accel.sh@20 -- # val= 00:17:39.858 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:39.858 12:14:33 -- accel/accel.sh@20 -- # val= 00:17:39.858 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:39.858 12:14:33 -- accel/accel.sh@20 -- # val= 00:17:39.858 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:39.858 12:14:33 -- accel/accel.sh@20 -- # val= 00:17:39.858 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:39.858 12:14:33 -- 
accel/accel.sh@19 -- # read -r var val 00:17:39.858 12:14:33 -- accel/accel.sh@20 -- # val= 00:17:39.858 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:39.858 12:14:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:39.858 12:14:33 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:39.858 12:14:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:39.858 00:17:39.858 real 0m1.558s 00:17:39.858 user 0m4.722s 00:17:39.858 sys 0m0.137s 00:17:39.858 12:14:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:39.858 12:14:33 -- common/autotest_common.sh@10 -- # set +x 00:17:39.858 ************************************ 00:17:39.858 END TEST accel_decomp_mcore 00:17:39.858 ************************************ 00:17:39.858 12:14:33 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:39.858 12:14:33 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:17:39.858 12:14:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:39.858 12:14:33 -- common/autotest_common.sh@10 -- # set +x 00:17:39.858 ************************************ 00:17:39.858 START TEST accel_decomp_full_mcore 00:17:39.858 ************************************ 00:17:39.858 12:14:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:39.858 12:14:33 -- accel/accel.sh@16 -- # local accel_opc 00:17:39.858 12:14:33 -- accel/accel.sh@17 -- # local accel_module 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:39.858 12:14:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:39.858 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:39.858 12:14:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:39.858 12:14:33 -- accel/accel.sh@12 -- # build_accel_config 00:17:39.858 12:14:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:39.858 12:14:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:39.858 12:14:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:39.858 12:14:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:39.858 12:14:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:39.858 12:14:33 -- accel/accel.sh@40 -- # local IFS=, 00:17:39.858 12:14:33 -- accel/accel.sh@41 -- # jq -r . 00:17:39.858 [2024-04-26 12:14:33.238618] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:17:39.858 [2024-04-26 12:14:33.238699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61825 ] 00:17:40.117 [2024-04-26 12:14:33.373724] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:40.117 [2024-04-26 12:14:33.492051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.117 [2024-04-26 12:14:33.492234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.117 [2024-04-26 12:14:33.492471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.117 [2024-04-26 12:14:33.492475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.117 12:14:33 -- accel/accel.sh@20 -- # val= 00:17:40.117 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:40.117 12:14:33 -- accel/accel.sh@20 -- # val= 00:17:40.117 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:40.117 12:14:33 -- accel/accel.sh@20 -- # val= 00:17:40.117 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:40.117 12:14:33 -- accel/accel.sh@20 -- # val=0xf 00:17:40.117 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:40.117 12:14:33 -- accel/accel.sh@20 -- # val= 00:17:40.117 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:40.117 12:14:33 -- accel/accel.sh@20 -- # val= 00:17:40.117 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:40.117 12:14:33 -- accel/accel.sh@20 -- # val=decompress 00:17:40.117 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:40.117 12:14:33 -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:40.117 12:14:33 -- accel/accel.sh@20 -- # val='111250 bytes' 00:17:40.117 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:40.117 12:14:33 -- accel/accel.sh@20 -- # val= 00:17:40.117 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:40.117 12:14:33 -- accel/accel.sh@20 -- # val=software 00:17:40.117 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:40.117 12:14:33 -- accel/accel.sh@22 -- # accel_module=software 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:40.117 12:14:33 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:40.117 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # IFS=: 
00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:40.117 12:14:33 -- accel/accel.sh@20 -- # val=32 00:17:40.117 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:40.117 12:14:33 -- accel/accel.sh@20 -- # val=32 00:17:40.117 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:40.117 12:14:33 -- accel/accel.sh@20 -- # val=1 00:17:40.117 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:40.117 12:14:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:40.117 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:40.117 12:14:33 -- accel/accel.sh@20 -- # val=Yes 00:17:40.117 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:40.117 12:14:33 -- accel/accel.sh@20 -- # val= 00:17:40.117 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:40.117 12:14:33 -- accel/accel.sh@20 -- # val= 00:17:40.117 12:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # IFS=: 00:17:40.117 12:14:33 -- accel/accel.sh@19 -- # read -r var val 00:17:41.521 12:14:34 -- accel/accel.sh@20 -- # val= 00:17:41.521 12:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:17:41.521 12:14:34 -- accel/accel.sh@19 -- # IFS=: 00:17:41.521 12:14:34 -- accel/accel.sh@19 -- # read -r var val 00:17:41.521 12:14:34 -- accel/accel.sh@20 -- # val= 00:17:41.521 12:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:17:41.521 12:14:34 -- accel/accel.sh@19 -- # IFS=: 00:17:41.521 12:14:34 -- accel/accel.sh@19 -- # read -r var val 00:17:41.521 12:14:34 -- accel/accel.sh@20 -- # val= 00:17:41.521 12:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:17:41.521 12:14:34 -- accel/accel.sh@19 -- # IFS=: 00:17:41.521 12:14:34 -- accel/accel.sh@19 -- # read -r var val 00:17:41.521 12:14:34 -- accel/accel.sh@20 -- # val= 00:17:41.521 12:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:17:41.521 12:14:34 -- accel/accel.sh@19 -- # IFS=: 00:17:41.521 12:14:34 -- accel/accel.sh@19 -- # read -r var val 00:17:41.521 12:14:34 -- accel/accel.sh@20 -- # val= 00:17:41.521 12:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:17:41.521 12:14:34 -- accel/accel.sh@19 -- # IFS=: 00:17:41.521 12:14:34 -- accel/accel.sh@19 -- # read -r var val 00:17:41.521 12:14:34 -- accel/accel.sh@20 -- # val= 00:17:41.521 12:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:17:41.521 12:14:34 -- accel/accel.sh@19 -- # IFS=: 00:17:41.521 12:14:34 -- accel/accel.sh@19 -- # read -r var val 00:17:41.521 12:14:34 -- accel/accel.sh@20 -- # val= 00:17:41.521 12:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:17:41.521 12:14:34 -- accel/accel.sh@19 -- # IFS=: 00:17:41.521 12:14:34 -- accel/accel.sh@19 -- # read -r var val 00:17:41.521 12:14:34 -- accel/accel.sh@20 -- # val= 00:17:41.521 12:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:17:41.521 12:14:34 -- accel/accel.sh@19 -- # IFS=: 00:17:41.521 12:14:34 -- 
accel/accel.sh@19 -- # read -r var val 00:17:41.521 12:14:34 -- accel/accel.sh@20 -- # val= 00:17:41.521 12:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:17:41.521 12:14:34 -- accel/accel.sh@19 -- # IFS=: 00:17:41.521 12:14:34 -- accel/accel.sh@19 -- # read -r var val 00:17:41.522 12:14:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:41.522 12:14:34 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:41.522 12:14:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:41.522 00:17:41.522 real 0m1.546s 00:17:41.522 user 0m4.768s 00:17:41.522 sys 0m0.124s 00:17:41.522 12:14:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:41.522 12:14:34 -- common/autotest_common.sh@10 -- # set +x 00:17:41.522 ************************************ 00:17:41.522 END TEST accel_decomp_full_mcore 00:17:41.522 ************************************ 00:17:41.522 12:14:34 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:41.522 12:14:34 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:17:41.522 12:14:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:41.522 12:14:34 -- common/autotest_common.sh@10 -- # set +x 00:17:41.522 ************************************ 00:17:41.522 START TEST accel_decomp_mthread 00:17:41.522 ************************************ 00:17:41.522 12:14:34 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:41.522 12:14:34 -- accel/accel.sh@16 -- # local accel_opc 00:17:41.522 12:14:34 -- accel/accel.sh@17 -- # local accel_module 00:17:41.522 12:14:34 -- accel/accel.sh@19 -- # IFS=: 00:17:41.522 12:14:34 -- accel/accel.sh@19 -- # read -r var val 00:17:41.522 12:14:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:41.522 12:14:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:41.522 12:14:34 -- accel/accel.sh@12 -- # build_accel_config 00:17:41.522 12:14:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:41.522 12:14:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:41.522 12:14:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:41.522 12:14:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:41.522 12:14:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:41.522 12:14:34 -- accel/accel.sh@40 -- # local IFS=, 00:17:41.522 12:14:34 -- accel/accel.sh@41 -- # jq -r . 00:17:41.522 [2024-04-26 12:14:34.903975] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:17:41.522 [2024-04-26 12:14:34.904103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61862 ] 00:17:41.780 [2024-04-26 12:14:35.046538] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.780 [2024-04-26 12:14:35.186890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.780 12:14:35 -- accel/accel.sh@20 -- # val= 00:17:41.780 12:14:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:41.780 12:14:35 -- accel/accel.sh@19 -- # IFS=: 00:17:41.780 12:14:35 -- accel/accel.sh@19 -- # read -r var val 00:17:41.780 12:14:35 -- accel/accel.sh@20 -- # val= 00:17:41.780 12:14:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:41.780 12:14:35 -- accel/accel.sh@19 -- # IFS=: 00:17:41.780 12:14:35 -- accel/accel.sh@19 -- # read -r var val 00:17:41.780 12:14:35 -- accel/accel.sh@20 -- # val= 00:17:41.780 12:14:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:41.780 12:14:35 -- accel/accel.sh@19 -- # IFS=: 00:17:41.780 12:14:35 -- accel/accel.sh@19 -- # read -r var val 00:17:42.039 12:14:35 -- accel/accel.sh@20 -- # val=0x1 00:17:42.039 12:14:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:42.039 12:14:35 -- accel/accel.sh@19 -- # IFS=: 00:17:42.039 12:14:35 -- accel/accel.sh@19 -- # read -r var val 00:17:42.039 12:14:35 -- accel/accel.sh@20 -- # val= 00:17:42.039 12:14:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:42.039 12:14:35 -- accel/accel.sh@19 -- # IFS=: 00:17:42.039 12:14:35 -- accel/accel.sh@19 -- # read -r var val 00:17:42.039 12:14:35 -- accel/accel.sh@20 -- # val= 00:17:42.039 12:14:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:42.039 12:14:35 -- accel/accel.sh@19 -- # IFS=: 00:17:42.039 12:14:35 -- accel/accel.sh@19 -- # read -r var val 00:17:42.040 12:14:35 -- accel/accel.sh@20 -- # val=decompress 00:17:42.040 12:14:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:42.040 12:14:35 -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # IFS=: 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # read -r var val 00:17:42.040 12:14:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:42.040 12:14:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # IFS=: 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # read -r var val 00:17:42.040 12:14:35 -- accel/accel.sh@20 -- # val= 00:17:42.040 12:14:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # IFS=: 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # read -r var val 00:17:42.040 12:14:35 -- accel/accel.sh@20 -- # val=software 00:17:42.040 12:14:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:42.040 12:14:35 -- accel/accel.sh@22 -- # accel_module=software 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # IFS=: 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # read -r var val 00:17:42.040 12:14:35 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:42.040 12:14:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # IFS=: 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # read -r var val 00:17:42.040 12:14:35 -- accel/accel.sh@20 -- # val=32 00:17:42.040 12:14:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # IFS=: 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # read -r var val 00:17:42.040 12:14:35 -- 
accel/accel.sh@20 -- # val=32 00:17:42.040 12:14:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # IFS=: 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # read -r var val 00:17:42.040 12:14:35 -- accel/accel.sh@20 -- # val=2 00:17:42.040 12:14:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # IFS=: 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # read -r var val 00:17:42.040 12:14:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:42.040 12:14:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # IFS=: 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # read -r var val 00:17:42.040 12:14:35 -- accel/accel.sh@20 -- # val=Yes 00:17:42.040 12:14:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # IFS=: 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # read -r var val 00:17:42.040 12:14:35 -- accel/accel.sh@20 -- # val= 00:17:42.040 12:14:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # IFS=: 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # read -r var val 00:17:42.040 12:14:35 -- accel/accel.sh@20 -- # val= 00:17:42.040 12:14:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # IFS=: 00:17:42.040 12:14:35 -- accel/accel.sh@19 -- # read -r var val 00:17:43.014 12:14:36 -- accel/accel.sh@20 -- # val= 00:17:43.014 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.014 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.014 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.014 12:14:36 -- accel/accel.sh@20 -- # val= 00:17:43.014 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.014 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.014 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.014 12:14:36 -- accel/accel.sh@20 -- # val= 00:17:43.014 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.014 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.014 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.014 12:14:36 -- accel/accel.sh@20 -- # val= 00:17:43.014 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.014 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.014 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.014 12:14:36 -- accel/accel.sh@20 -- # val= 00:17:43.014 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.014 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.014 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.014 12:14:36 -- accel/accel.sh@20 -- # val= 00:17:43.014 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.014 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.014 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.014 12:14:36 -- accel/accel.sh@20 -- # val= 00:17:43.014 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.014 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.014 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.014 12:14:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:43.014 12:14:36 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:43.014 12:14:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:43.014 00:17:43.014 real 0m1.571s 00:17:43.014 user 0m1.347s 00:17:43.014 sys 0m0.134s 00:17:43.014 12:14:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:43.014 ************************************ 00:17:43.014 END TEST accel_decomp_mthread 00:17:43.014 12:14:36 -- 
common/autotest_common.sh@10 -- # set +x 00:17:43.014 ************************************ 00:17:43.014 12:14:36 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:43.014 12:14:36 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:17:43.014 12:14:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:43.014 12:14:36 -- common/autotest_common.sh@10 -- # set +x 00:17:43.369 ************************************ 00:17:43.369 START TEST accel_deomp_full_mthread 00:17:43.369 ************************************ 00:17:43.369 12:14:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:43.370 12:14:36 -- accel/accel.sh@16 -- # local accel_opc 00:17:43.370 12:14:36 -- accel/accel.sh@17 -- # local accel_module 00:17:43.370 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.370 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.370 12:14:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:43.370 12:14:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:43.370 12:14:36 -- accel/accel.sh@12 -- # build_accel_config 00:17:43.370 12:14:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:43.370 12:14:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:43.370 12:14:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:43.370 12:14:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:43.370 12:14:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:43.370 12:14:36 -- accel/accel.sh@40 -- # local IFS=, 00:17:43.370 12:14:36 -- accel/accel.sh@41 -- # jq -r . 00:17:43.370 [2024-04-26 12:14:36.577558] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:17:43.370 [2024-04-26 12:14:36.577646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61906 ] 00:17:43.370 [2024-04-26 12:14:36.708072] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.702 [2024-04-26 12:14:36.827392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.702 12:14:36 -- accel/accel.sh@20 -- # val= 00:17:43.702 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.702 12:14:36 -- accel/accel.sh@20 -- # val= 00:17:43.702 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.702 12:14:36 -- accel/accel.sh@20 -- # val= 00:17:43.702 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.702 12:14:36 -- accel/accel.sh@20 -- # val=0x1 00:17:43.702 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.702 12:14:36 -- accel/accel.sh@20 -- # val= 00:17:43.702 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.702 12:14:36 -- accel/accel.sh@20 -- # val= 00:17:43.702 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.702 12:14:36 -- accel/accel.sh@20 -- # val=decompress 00:17:43.702 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.702 12:14:36 -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.702 12:14:36 -- accel/accel.sh@20 -- # val='111250 bytes' 00:17:43.702 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.702 12:14:36 -- accel/accel.sh@20 -- # val= 00:17:43.702 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.702 12:14:36 -- accel/accel.sh@20 -- # val=software 00:17:43.702 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.702 12:14:36 -- accel/accel.sh@22 -- # accel_module=software 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.702 12:14:36 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:43.702 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.702 12:14:36 -- accel/accel.sh@20 -- # val=32 00:17:43.702 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.702 12:14:36 -- 
accel/accel.sh@20 -- # val=32 00:17:43.702 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.702 12:14:36 -- accel/accel.sh@20 -- # val=2 00:17:43.702 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.702 12:14:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:43.702 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.702 12:14:36 -- accel/accel.sh@20 -- # val=Yes 00:17:43.702 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.702 12:14:36 -- accel/accel.sh@20 -- # val= 00:17:43.702 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:43.702 12:14:36 -- accel/accel.sh@20 -- # val= 00:17:43.702 12:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # IFS=: 00:17:43.702 12:14:36 -- accel/accel.sh@19 -- # read -r var val 00:17:45.079 12:14:38 -- accel/accel.sh@20 -- # val= 00:17:45.079 12:14:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:45.079 12:14:38 -- accel/accel.sh@19 -- # IFS=: 00:17:45.079 12:14:38 -- accel/accel.sh@19 -- # read -r var val 00:17:45.079 12:14:38 -- accel/accel.sh@20 -- # val= 00:17:45.079 12:14:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:45.079 12:14:38 -- accel/accel.sh@19 -- # IFS=: 00:17:45.079 12:14:38 -- accel/accel.sh@19 -- # read -r var val 00:17:45.079 12:14:38 -- accel/accel.sh@20 -- # val= 00:17:45.079 12:14:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:45.079 12:14:38 -- accel/accel.sh@19 -- # IFS=: 00:17:45.079 12:14:38 -- accel/accel.sh@19 -- # read -r var val 00:17:45.079 12:14:38 -- accel/accel.sh@20 -- # val= 00:17:45.079 12:14:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:45.079 12:14:38 -- accel/accel.sh@19 -- # IFS=: 00:17:45.079 12:14:38 -- accel/accel.sh@19 -- # read -r var val 00:17:45.079 12:14:38 -- accel/accel.sh@20 -- # val= 00:17:45.079 12:14:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:45.079 12:14:38 -- accel/accel.sh@19 -- # IFS=: 00:17:45.079 12:14:38 -- accel/accel.sh@19 -- # read -r var val 00:17:45.079 12:14:38 -- accel/accel.sh@20 -- # val= 00:17:45.079 12:14:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:45.079 12:14:38 -- accel/accel.sh@19 -- # IFS=: 00:17:45.079 12:14:38 -- accel/accel.sh@19 -- # read -r var val 00:17:45.079 12:14:38 -- accel/accel.sh@20 -- # val= 00:17:45.079 12:14:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:45.079 12:14:38 -- accel/accel.sh@19 -- # IFS=: 00:17:45.079 12:14:38 -- accel/accel.sh@19 -- # read -r var val 00:17:45.079 12:14:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:45.079 12:14:38 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:45.079 12:14:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:45.079 00:17:45.079 real 0m1.563s 00:17:45.079 user 0m1.356s 00:17:45.079 sys 0m0.112s 00:17:45.079 12:14:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:45.079 ************************************ 00:17:45.079 END TEST accel_deomp_full_mthread 00:17:45.079 12:14:38 -- 
common/autotest_common.sh@10 -- # set +x 00:17:45.079 ************************************ 00:17:45.079 12:14:38 -- accel/accel.sh@124 -- # [[ n == y ]] 00:17:45.079 12:14:38 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:17:45.079 12:14:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:45.079 12:14:38 -- accel/accel.sh@137 -- # build_accel_config 00:17:45.079 12:14:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:45.079 12:14:38 -- common/autotest_common.sh@10 -- # set +x 00:17:45.079 12:14:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:45.079 12:14:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:45.079 12:14:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:45.079 12:14:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:45.079 12:14:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:45.079 12:14:38 -- accel/accel.sh@40 -- # local IFS=, 00:17:45.079 12:14:38 -- accel/accel.sh@41 -- # jq -r . 00:17:45.079 ************************************ 00:17:45.079 START TEST accel_dif_functional_tests 00:17:45.079 ************************************ 00:17:45.079 12:14:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:17:45.079 [2024-04-26 12:14:38.284525] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:17:45.079 [2024-04-26 12:14:38.284624] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61945 ] 00:17:45.079 [2024-04-26 12:14:38.425007] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:45.346 [2024-04-26 12:14:38.553921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.346 [2024-04-26 12:14:38.554031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.346 [2024-04-26 12:14:38.554040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.346 00:17:45.346 00:17:45.346 CUnit - A unit testing framework for C - Version 2.1-3 00:17:45.346 http://cunit.sourceforge.net/ 00:17:45.346 00:17:45.346 00:17:45.346 Suite: accel_dif 00:17:45.346 Test: verify: DIF generated, GUARD check ...passed 00:17:45.346 Test: verify: DIF generated, APPTAG check ...passed 00:17:45.346 Test: verify: DIF generated, REFTAG check ...passed 00:17:45.346 Test: verify: DIF not generated, GUARD check ...passed 00:17:45.346 Test: verify: DIF not generated, APPTAG check ...[2024-04-26 12:14:38.648897] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:17:45.346 [2024-04-26 12:14:38.648970] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:17:45.346 [2024-04-26 12:14:38.649004] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:17:45.346 [2024-04-26 12:14:38.649031] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:17:45.346 passed 00:17:45.346 Test: verify: DIF not generated, REFTAG check ...passed 00:17:45.346 Test: verify: APPTAG correct, APPTAG check ...passed 00:17:45.346 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:17:45.346 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-04-26 12:14:38.649054] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, 
Expected=a, Actual=5a5a5a5a 00:17:45.347 [2024-04-26 12:14:38.649080] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:17:45.347 [2024-04-26 12:14:38.649133] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:17:45.347 passed 00:17:45.347 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:17:45.347 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:17:45.347 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:17:45.347 Test: generate copy: DIF generated, GUARD check ...[2024-04-26 12:14:38.649287] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:17:45.347 passed 00:17:45.347 Test: generate copy: DIF generated, APTTAG check ...passed 00:17:45.347 Test: generate copy: DIF generated, REFTAG check ...passed 00:17:45.347 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:17:45.347 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:17:45.347 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:17:45.347 Test: generate copy: iovecs-len validate ...[2024-04-26 12:14:38.649533] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:17:45.347 passed 00:17:45.347 Test: generate copy: buffer alignment validate ...passed 00:17:45.347 00:17:45.347 Run Summary: Type Total Ran Passed Failed Inactive 00:17:45.347 suites 1 1 n/a 0 0 00:17:45.347 tests 20 20 20 0 0 00:17:45.347 asserts 204 204 204 0 n/a 00:17:45.347 00:17:45.347 Elapsed time = 0.002 seconds 00:17:45.631 00:17:45.631 real 0m0.654s 00:17:45.631 user 0m0.814s 00:17:45.631 sys 0m0.155s 00:17:45.631 12:14:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:45.631 12:14:38 -- common/autotest_common.sh@10 -- # set +x 00:17:45.631 ************************************ 00:17:45.631 END TEST accel_dif_functional_tests 00:17:45.631 ************************************ 00:17:45.631 00:17:45.631 real 0m37.712s 00:17:45.631 user 0m38.316s 00:17:45.631 sys 0m4.773s 00:17:45.631 12:14:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:45.631 12:14:38 -- common/autotest_common.sh@10 -- # set +x 00:17:45.631 ************************************ 00:17:45.631 END TEST accel 00:17:45.631 ************************************ 00:17:45.631 12:14:38 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:17:45.631 12:14:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:45.631 12:14:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:45.631 12:14:38 -- common/autotest_common.sh@10 -- # set +x 00:17:45.631 ************************************ 00:17:45.631 START TEST accel_rpc 00:17:45.631 ************************************ 00:17:45.631 12:14:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:17:45.890 * Looking for test storage... 
00:17:45.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:17:45.890 12:14:39 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:17:45.890 12:14:39 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62015 00:17:45.890 12:14:39 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:45.890 12:14:39 -- accel/accel_rpc.sh@15 -- # waitforlisten 62015 00:17:45.890 12:14:39 -- common/autotest_common.sh@817 -- # '[' -z 62015 ']' 00:17:45.890 12:14:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.890 12:14:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:45.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.890 12:14:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.890 12:14:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:45.890 12:14:39 -- common/autotest_common.sh@10 -- # set +x 00:17:45.890 [2024-04-26 12:14:39.177652] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:17:45.890 [2024-04-26 12:14:39.177753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62015 ] 00:17:45.890 [2024-04-26 12:14:39.314836] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.148 [2024-04-26 12:14:39.434865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.715 12:14:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:46.715 12:14:40 -- common/autotest_common.sh@850 -- # return 0 00:17:46.715 12:14:40 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:17:46.715 12:14:40 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:17:46.715 12:14:40 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:17:46.715 12:14:40 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:17:46.715 12:14:40 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:17:46.715 12:14:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:46.715 12:14:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:46.715 12:14:40 -- common/autotest_common.sh@10 -- # set +x 00:17:46.974 ************************************ 00:17:46.974 START TEST accel_assign_opcode 00:17:46.974 ************************************ 00:17:46.974 12:14:40 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:17:46.974 12:14:40 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:17:46.974 12:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:46.974 12:14:40 -- common/autotest_common.sh@10 -- # set +x 00:17:46.974 [2024-04-26 12:14:40.227456] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:17:46.974 12:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:46.974 12:14:40 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:17:46.974 12:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:46.974 12:14:40 -- common/autotest_common.sh@10 -- # set +x 00:17:46.974 [2024-04-26 12:14:40.235440] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:17:46.974 12:14:40 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:46.974 12:14:40 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:17:46.974 12:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:46.974 12:14:40 -- common/autotest_common.sh@10 -- # set +x 00:17:47.233 12:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.233 12:14:40 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:17:47.233 12:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.233 12:14:40 -- common/autotest_common.sh@10 -- # set +x 00:17:47.233 12:14:40 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:17:47.233 12:14:40 -- accel/accel_rpc.sh@42 -- # grep software 00:17:47.233 12:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.233 software 00:17:47.233 00:17:47.233 real 0m0.299s 00:17:47.233 user 0m0.053s 00:17:47.233 sys 0m0.014s 00:17:47.233 12:14:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:47.233 12:14:40 -- common/autotest_common.sh@10 -- # set +x 00:17:47.233 ************************************ 00:17:47.233 END TEST accel_assign_opcode 00:17:47.233 ************************************ 00:17:47.233 12:14:40 -- accel/accel_rpc.sh@55 -- # killprocess 62015 00:17:47.233 12:14:40 -- common/autotest_common.sh@936 -- # '[' -z 62015 ']' 00:17:47.233 12:14:40 -- common/autotest_common.sh@940 -- # kill -0 62015 00:17:47.233 12:14:40 -- common/autotest_common.sh@941 -- # uname 00:17:47.233 12:14:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:47.233 12:14:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62015 00:17:47.233 12:14:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:47.233 12:14:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:47.233 killing process with pid 62015 00:17:47.233 12:14:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62015' 00:17:47.233 12:14:40 -- common/autotest_common.sh@955 -- # kill 62015 00:17:47.233 12:14:40 -- common/autotest_common.sh@960 -- # wait 62015 00:17:47.800 00:17:47.800 real 0m1.957s 00:17:47.800 user 0m2.081s 00:17:47.800 sys 0m0.444s 00:17:47.800 12:14:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:47.800 12:14:40 -- common/autotest_common.sh@10 -- # set +x 00:17:47.800 ************************************ 00:17:47.800 END TEST accel_rpc 00:17:47.800 ************************************ 00:17:47.800 12:14:41 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:17:47.800 12:14:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:47.800 12:14:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:47.800 12:14:41 -- common/autotest_common.sh@10 -- # set +x 00:17:47.800 ************************************ 00:17:47.800 START TEST app_cmdline 00:17:47.800 ************************************ 00:17:47.800 12:14:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:17:47.800 * Looking for test storage... 
00:17:47.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:17:47.800 12:14:41 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:17:47.800 12:14:41 -- app/cmdline.sh@17 -- # spdk_tgt_pid=62118 00:17:47.800 12:14:41 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:17:47.800 12:14:41 -- app/cmdline.sh@18 -- # waitforlisten 62118 00:17:47.800 12:14:41 -- common/autotest_common.sh@817 -- # '[' -z 62118 ']' 00:17:47.800 12:14:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.800 12:14:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:47.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.800 12:14:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.800 12:14:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:47.800 12:14:41 -- common/autotest_common.sh@10 -- # set +x 00:17:47.800 [2024-04-26 12:14:41.243881] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:17:47.800 [2024-04-26 12:14:41.244610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62118 ] 00:17:48.058 [2024-04-26 12:14:41.380327] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.058 [2024-04-26 12:14:41.494061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.994 12:14:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:48.994 12:14:42 -- common/autotest_common.sh@850 -- # return 0 00:17:48.994 12:14:42 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:17:48.994 { 00:17:48.994 "version": "SPDK v24.05-pre git sha1 e29339c01", 00:17:48.994 "fields": { 00:17:48.994 "major": 24, 00:17:48.994 "minor": 5, 00:17:48.994 "patch": 0, 00:17:48.994 "suffix": "-pre", 00:17:48.994 "commit": "e29339c01" 00:17:48.994 } 00:17:48.994 } 00:17:48.994 12:14:42 -- app/cmdline.sh@22 -- # expected_methods=() 00:17:48.994 12:14:42 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:17:48.994 12:14:42 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:17:48.994 12:14:42 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:17:48.994 12:14:42 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:17:48.994 12:14:42 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:17:48.994 12:14:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.994 12:14:42 -- app/cmdline.sh@26 -- # sort 00:17:48.994 12:14:42 -- common/autotest_common.sh@10 -- # set +x 00:17:48.994 12:14:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.252 12:14:42 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:17:49.252 12:14:42 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:17:49.252 12:14:42 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:49.252 12:14:42 -- common/autotest_common.sh@638 -- # local es=0 00:17:49.252 12:14:42 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:49.252 12:14:42 -- 
common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.252 12:14:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:49.252 12:14:42 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.252 12:14:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:49.252 12:14:42 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.252 12:14:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:49.252 12:14:42 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.252 12:14:42 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:49.252 12:14:42 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:49.511 request: 00:17:49.511 { 00:17:49.511 "method": "env_dpdk_get_mem_stats", 00:17:49.511 "req_id": 1 00:17:49.511 } 00:17:49.511 Got JSON-RPC error response 00:17:49.511 response: 00:17:49.511 { 00:17:49.511 "code": -32601, 00:17:49.511 "message": "Method not found" 00:17:49.511 } 00:17:49.511 12:14:42 -- common/autotest_common.sh@641 -- # es=1 00:17:49.511 12:14:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:49.511 12:14:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:49.511 12:14:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:49.511 12:14:42 -- app/cmdline.sh@1 -- # killprocess 62118 00:17:49.511 12:14:42 -- common/autotest_common.sh@936 -- # '[' -z 62118 ']' 00:17:49.511 12:14:42 -- common/autotest_common.sh@940 -- # kill -0 62118 00:17:49.511 12:14:42 -- common/autotest_common.sh@941 -- # uname 00:17:49.511 12:14:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:49.511 12:14:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62118 00:17:49.511 12:14:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:49.511 killing process with pid 62118 00:17:49.511 12:14:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:49.511 12:14:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62118' 00:17:49.511 12:14:42 -- common/autotest_common.sh@955 -- # kill 62118 00:17:49.511 12:14:42 -- common/autotest_common.sh@960 -- # wait 62118 00:17:49.769 00:17:49.769 real 0m2.088s 00:17:49.769 user 0m2.565s 00:17:49.769 sys 0m0.461s 00:17:49.769 12:14:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:49.770 12:14:43 -- common/autotest_common.sh@10 -- # set +x 00:17:49.770 ************************************ 00:17:49.770 END TEST app_cmdline 00:17:49.770 ************************************ 00:17:50.028 12:14:43 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:17:50.028 12:14:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:50.028 12:14:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:50.028 12:14:43 -- common/autotest_common.sh@10 -- # set +x 00:17:50.028 ************************************ 00:17:50.028 START TEST version 00:17:50.028 ************************************ 00:17:50.028 12:14:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:17:50.028 * Looking for test storage... 
00:17:50.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:17:50.028 12:14:43 -- app/version.sh@17 -- # get_header_version major 00:17:50.028 12:14:43 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:50.028 12:14:43 -- app/version.sh@14 -- # cut -f2 00:17:50.028 12:14:43 -- app/version.sh@14 -- # tr -d '"' 00:17:50.028 12:14:43 -- app/version.sh@17 -- # major=24 00:17:50.028 12:14:43 -- app/version.sh@18 -- # get_header_version minor 00:17:50.028 12:14:43 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:50.028 12:14:43 -- app/version.sh@14 -- # cut -f2 00:17:50.028 12:14:43 -- app/version.sh@14 -- # tr -d '"' 00:17:50.028 12:14:43 -- app/version.sh@18 -- # minor=5 00:17:50.028 12:14:43 -- app/version.sh@19 -- # get_header_version patch 00:17:50.028 12:14:43 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:50.028 12:14:43 -- app/version.sh@14 -- # cut -f2 00:17:50.028 12:14:43 -- app/version.sh@14 -- # tr -d '"' 00:17:50.028 12:14:43 -- app/version.sh@19 -- # patch=0 00:17:50.028 12:14:43 -- app/version.sh@20 -- # get_header_version suffix 00:17:50.028 12:14:43 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:50.028 12:14:43 -- app/version.sh@14 -- # cut -f2 00:17:50.028 12:14:43 -- app/version.sh@14 -- # tr -d '"' 00:17:50.028 12:14:43 -- app/version.sh@20 -- # suffix=-pre 00:17:50.028 12:14:43 -- app/version.sh@22 -- # version=24.5 00:17:50.028 12:14:43 -- app/version.sh@25 -- # (( patch != 0 )) 00:17:50.028 12:14:43 -- app/version.sh@28 -- # version=24.5rc0 00:17:50.029 12:14:43 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:17:50.029 12:14:43 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:17:50.029 12:14:43 -- app/version.sh@30 -- # py_version=24.5rc0 00:17:50.029 12:14:43 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:17:50.029 00:17:50.029 real 0m0.150s 00:17:50.029 user 0m0.089s 00:17:50.029 sys 0m0.092s 00:17:50.029 12:14:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:50.029 12:14:43 -- common/autotest_common.sh@10 -- # set +x 00:17:50.029 ************************************ 00:17:50.029 END TEST version 00:17:50.029 ************************************ 00:17:50.287 12:14:43 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:17:50.287 12:14:43 -- spdk/autotest.sh@194 -- # uname -s 00:17:50.287 12:14:43 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:50.287 12:14:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:50.287 12:14:43 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:17:50.287 12:14:43 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:17:50.287 12:14:43 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:17:50.287 12:14:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:50.287 12:14:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:50.287 12:14:43 -- common/autotest_common.sh@10 -- # set +x 00:17:50.287 ************************************ 00:17:50.287 START TEST spdk_dd 00:17:50.287 
************************************ 00:17:50.287 12:14:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:17:50.288 * Looking for test storage... 00:17:50.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:17:50.288 12:14:43 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:50.288 12:14:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.288 12:14:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.288 12:14:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.288 12:14:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.288 12:14:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.288 12:14:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.288 12:14:43 -- paths/export.sh@5 -- # export PATH 00:17:50.288 12:14:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.288 12:14:43 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:50.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:50.546 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:50.546 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:50.806 12:14:44 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:17:50.806 12:14:44 -- dd/dd.sh@11 -- # nvme_in_userspace 00:17:50.806 12:14:44 -- scripts/common.sh@309 -- # local bdf bdfs 00:17:50.806 12:14:44 -- scripts/common.sh@310 -- # local nvmes 00:17:50.806 12:14:44 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:17:50.806 12:14:44 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:17:50.806 12:14:44 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:17:50.806 12:14:44 -- scripts/common.sh@295 -- # local bdf= 00:17:50.806 12:14:44 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:17:50.806 12:14:44 -- scripts/common.sh@230 -- # local class 
00:17:50.806 12:14:44 -- scripts/common.sh@231 -- # local subclass 00:17:50.806 12:14:44 -- scripts/common.sh@232 -- # local progif 00:17:50.806 12:14:44 -- scripts/common.sh@233 -- # printf %02x 1 00:17:50.806 12:14:44 -- scripts/common.sh@233 -- # class=01 00:17:50.806 12:14:44 -- scripts/common.sh@234 -- # printf %02x 8 00:17:50.806 12:14:44 -- scripts/common.sh@234 -- # subclass=08 00:17:50.806 12:14:44 -- scripts/common.sh@235 -- # printf %02x 2 00:17:50.806 12:14:44 -- scripts/common.sh@235 -- # progif=02 00:17:50.806 12:14:44 -- scripts/common.sh@237 -- # hash lspci 00:17:50.806 12:14:44 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:17:50.806 12:14:44 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:17:50.806 12:14:44 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:17:50.806 12:14:44 -- scripts/common.sh@240 -- # grep -i -- -p02 00:17:50.806 12:14:44 -- scripts/common.sh@242 -- # tr -d '"' 00:17:50.806 12:14:44 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:17:50.806 12:14:44 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:17:50.806 12:14:44 -- scripts/common.sh@15 -- # local i 00:17:50.806 12:14:44 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:17:50.806 12:14:44 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:17:50.806 12:14:44 -- scripts/common.sh@24 -- # return 0 00:17:50.806 12:14:44 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:17:50.806 12:14:44 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:17:50.806 12:14:44 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:17:50.806 12:14:44 -- scripts/common.sh@15 -- # local i 00:17:50.806 12:14:44 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:17:50.806 12:14:44 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:17:50.806 12:14:44 -- scripts/common.sh@24 -- # return 0 00:17:50.806 12:14:44 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:17:50.806 12:14:44 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:17:50.806 12:14:44 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:17:50.806 12:14:44 -- scripts/common.sh@320 -- # uname -s 00:17:50.806 12:14:44 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:17:50.806 12:14:44 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:17:50.806 12:14:44 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:17:50.806 12:14:44 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:17:50.806 12:14:44 -- scripts/common.sh@320 -- # uname -s 00:17:50.806 12:14:44 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:17:50.806 12:14:44 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:17:50.806 12:14:44 -- scripts/common.sh@325 -- # (( 2 )) 00:17:50.806 12:14:44 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:17:50.806 12:14:44 -- dd/dd.sh@13 -- # check_liburing 00:17:50.806 12:14:44 -- dd/common.sh@139 -- # local lib so 00:17:50.806 12:14:44 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:17:50.806 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.806 12:14:44 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:17:50.806 12:14:44 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:50.806 12:14:44 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:17:50.806 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.806 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:17:50.806 12:14:44 
-- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.806 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:17:50.806 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.806 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:17:50.806 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.806 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:17:50.806 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.806 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:17:50.806 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.806 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:17:50.806 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.806 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:17:50.806 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.806 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:17:50.806 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.806 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:17:50.806 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.806 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:17:50.806 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.806 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:17:50.806 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.806 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:17:50.806 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.806 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:17:50.806 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.806 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:17:50.806 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.806 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:17:50.806 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.806 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.6.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # 
read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_event.so.13.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_sock.so.9.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_util.so.9.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # 
[[ librte_kvargs.so.24 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:50.807 12:14:44 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:17:50.807 12:14:44 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:17:50.807 * spdk_dd linked to liburing 00:17:50.807 12:14:44 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:17:50.808 12:14:44 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:17:50.808 12:14:44 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:17:50.808 12:14:44 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:17:50.808 12:14:44 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:17:50.808 12:14:44 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:17:50.808 12:14:44 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:17:50.808 12:14:44 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:17:50.808 12:14:44 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:17:50.808 12:14:44 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:17:50.808 12:14:44 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:17:50.808 12:14:44 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:17:50.808 12:14:44 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:17:50.808 12:14:44 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:17:50.808 12:14:44 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:17:50.808 12:14:44 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:17:50.808 12:14:44 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:17:50.808 12:14:44 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:17:50.808 12:14:44 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:17:50.808 12:14:44 -- 
common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:17:50.808 12:14:44 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:17:50.808 12:14:44 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:17:50.808 12:14:44 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:17:50.808 12:14:44 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:17:50.808 12:14:44 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:17:50.808 12:14:44 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:17:50.808 12:14:44 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:17:50.808 12:14:44 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:17:50.808 12:14:44 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:17:50.808 12:14:44 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:17:50.808 12:14:44 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:17:50.808 12:14:44 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:17:50.808 12:14:44 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:17:50.808 12:14:44 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:17:50.808 12:14:44 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:17:50.808 12:14:44 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:17:50.808 12:14:44 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:17:50.808 12:14:44 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:17:50.808 12:14:44 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:17:50.808 12:14:44 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:17:50.808 12:14:44 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:17:50.808 12:14:44 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:17:50.808 12:14:44 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:17:50.808 12:14:44 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:17:50.808 12:14:44 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:17:50.808 12:14:44 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:17:50.808 12:14:44 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:17:50.808 12:14:44 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:17:50.808 12:14:44 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:17:50.808 12:14:44 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:17:50.808 12:14:44 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:17:50.808 12:14:44 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:17:50.808 12:14:44 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:17:50.808 12:14:44 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:17:50.808 12:14:44 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:17:50.808 12:14:44 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=y 00:17:50.808 12:14:44 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:17:50.808 12:14:44 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:17:50.808 12:14:44 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:17:50.808 12:14:44 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:17:50.808 12:14:44 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:17:50.808 12:14:44 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:17:50.808 12:14:44 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:17:50.808 12:14:44 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:17:50.808 12:14:44 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:17:50.808 12:14:44 -- 
common/build_config.sh@64 -- # CONFIG_APPS=y 00:17:50.808 12:14:44 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:17:50.808 12:14:44 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:17:50.808 12:14:44 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:17:50.808 12:14:44 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:17:50.808 12:14:44 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:17:50.808 12:14:44 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:17:50.808 12:14:44 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:17:50.808 12:14:44 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:17:50.808 12:14:44 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:17:50.808 12:14:44 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:17:50.808 12:14:44 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:17:50.808 12:14:44 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:17:50.808 12:14:44 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:17:50.808 12:14:44 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:17:50.808 12:14:44 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:17:50.808 12:14:44 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:17:50.808 12:14:44 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:17:50.808 12:14:44 -- common/build_config.sh@82 -- # CONFIG_URING=y 00:17:50.808 12:14:44 -- dd/common.sh@149 -- # [[ y != y ]] 00:17:50.808 12:14:44 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:17:50.808 12:14:44 -- dd/common.sh@156 -- # export liburing_in_use=1 00:17:50.808 12:14:44 -- dd/common.sh@156 -- # liburing_in_use=1 00:17:50.808 12:14:44 -- dd/common.sh@157 -- # return 0 00:17:50.808 12:14:44 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:17:50.808 12:14:44 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:17:50.808 12:14:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:50.808 12:14:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:50.808 12:14:44 -- common/autotest_common.sh@10 -- # set +x 00:17:50.808 ************************************ 00:17:50.808 START TEST spdk_dd_basic_rw 00:17:50.808 ************************************ 00:17:50.808 12:14:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:17:51.069 * Looking for test storage... 
00:17:51.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:17:51.069 12:14:44 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:51.069 12:14:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.069 12:14:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.069 12:14:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.069 12:14:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.069 12:14:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.069 12:14:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.069 12:14:44 -- paths/export.sh@5 -- # export PATH 00:17:51.069 12:14:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.069 12:14:44 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:17:51.069 12:14:44 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:17:51.069 12:14:44 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:17:51.069 12:14:44 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:17:51.069 12:14:44 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:17:51.069 12:14:44 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:17:51.069 12:14:44 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:17:51.069 12:14:44 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:51.069 12:14:44 -- dd/basic_rw.sh@92 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:51.069 12:14:44 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:17:51.069 12:14:44 -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:17:51.069 12:14:44 -- dd/common.sh@126 -- # mapfile -t id 00:17:51.069 12:14:44 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:17:51.070 12:14:44 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric 
Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On 
Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:17:51.070 12:14:44 -- dd/common.sh@130 -- # lbaf=04 00:17:51.071 12:14:44 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID 
List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write 
Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:17:51.071 12:14:44 -- dd/common.sh@132 -- # lbaf=4096 00:17:51.071 12:14:44 -- dd/common.sh@134 -- # echo 4096 00:17:51.071 12:14:44 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:17:51.071 12:14:44 -- dd/basic_rw.sh@96 -- # gen_conf 00:17:51.071 12:14:44 -- dd/basic_rw.sh@96 -- # : 00:17:51.071 12:14:44 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:17:51.071 12:14:44 -- dd/common.sh@31 -- # xtrace_disable 00:17:51.071 12:14:44 -- common/autotest_common.sh@10 -- # set +x 00:17:51.071 12:14:44 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:17:51.071 12:14:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:51.071 12:14:44 -- common/autotest_common.sh@10 -- # set +x 00:17:51.330 { 
00:17:51.330 "subsystems": [ 00:17:51.330 { 00:17:51.330 "subsystem": "bdev", 00:17:51.330 "config": [ 00:17:51.330 { 00:17:51.330 "params": { 00:17:51.330 "trtype": "pcie", 00:17:51.330 "traddr": "0000:00:10.0", 00:17:51.330 "name": "Nvme0" 00:17:51.330 }, 00:17:51.330 "method": "bdev_nvme_attach_controller" 00:17:51.330 }, 00:17:51.330 { 00:17:51.330 "method": "bdev_wait_for_examine" 00:17:51.330 } 00:17:51.330 ] 00:17:51.330 } 00:17:51.330 ] 00:17:51.330 } 00:17:51.330 ************************************ 00:17:51.330 START TEST dd_bs_lt_native_bs 00:17:51.330 ************************************ 00:17:51.330 12:14:44 -- common/autotest_common.sh@1111 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:17:51.330 12:14:44 -- common/autotest_common.sh@638 -- # local es=0 00:17:51.330 12:14:44 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:17:51.330 12:14:44 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:51.330 12:14:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:51.331 12:14:44 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:51.331 12:14:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:51.331 12:14:44 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:51.331 12:14:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:51.331 12:14:44 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:51.331 12:14:44 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:17:51.331 12:14:44 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:17:51.331 [2024-04-26 12:14:44.641685] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:17:51.331 [2024-04-26 12:14:44.641793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62469 ] 00:17:51.331 [2024-04-26 12:14:44.787696] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.589 [2024-04-26 12:14:44.925815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.847 [2024-04-26 12:14:45.080035] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:17:51.847 [2024-04-26 12:14:45.080130] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:51.847 [2024-04-26 12:14:45.204955] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:17:52.107 12:14:45 -- common/autotest_common.sh@641 -- # es=234 00:17:52.107 12:14:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:52.107 12:14:45 -- common/autotest_common.sh@650 -- # es=106 00:17:52.107 12:14:45 -- common/autotest_common.sh@651 -- # case "$es" in 00:17:52.107 12:14:45 -- common/autotest_common.sh@658 -- # es=1 00:17:52.107 12:14:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:52.107 00:17:52.107 real 0m0.738s 00:17:52.107 user 0m0.469s 00:17:52.107 sys 0m0.164s 00:17:52.107 12:14:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:52.107 12:14:45 -- common/autotest_common.sh@10 -- # set +x 00:17:52.107 ************************************ 00:17:52.107 END TEST dd_bs_lt_native_bs 00:17:52.107 ************************************ 00:17:52.107 12:14:45 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:17:52.107 12:14:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:52.107 12:14:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:52.107 12:14:45 -- common/autotest_common.sh@10 -- # set +x 00:17:52.107 ************************************ 00:17:52.107 START TEST dd_rw 00:17:52.107 ************************************ 00:17:52.107 12:14:45 -- common/autotest_common.sh@1111 -- # basic_rw 4096 00:17:52.107 12:14:45 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:17:52.107 12:14:45 -- dd/basic_rw.sh@12 -- # local count size 00:17:52.107 12:14:45 -- dd/basic_rw.sh@13 -- # local qds bss 00:17:52.107 12:14:45 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:17:52.107 12:14:45 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:17:52.107 12:14:45 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:17:52.107 12:14:45 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:17:52.107 12:14:45 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:17:52.107 12:14:45 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:17:52.107 12:14:45 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:17:52.107 12:14:45 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:17:52.107 12:14:45 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:17:52.107 12:14:45 -- dd/basic_rw.sh@23 -- # count=15 00:17:52.107 12:14:45 -- dd/basic_rw.sh@24 -- # count=15 00:17:52.107 12:14:45 -- dd/basic_rw.sh@25 -- # size=61440 00:17:52.107 12:14:45 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:17:52.107 12:14:45 -- dd/common.sh@98 -- # xtrace_disable 00:17:52.107 12:14:45 -- common/autotest_common.sh@10 -- # set +x 00:17:52.675 12:14:46 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
00:17:52.675 12:14:46 -- dd/basic_rw.sh@30 -- # gen_conf 00:17:52.675 12:14:46 -- dd/common.sh@31 -- # xtrace_disable 00:17:52.675 12:14:46 -- common/autotest_common.sh@10 -- # set +x 00:17:52.932 [2024-04-26 12:14:46.146616] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:17:52.933 [2024-04-26 12:14:46.147145] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62504 ] 00:17:52.933 { 00:17:52.933 "subsystems": [ 00:17:52.933 { 00:17:52.933 "subsystem": "bdev", 00:17:52.933 "config": [ 00:17:52.933 { 00:17:52.933 "params": { 00:17:52.933 "trtype": "pcie", 00:17:52.933 "traddr": "0000:00:10.0", 00:17:52.933 "name": "Nvme0" 00:17:52.933 }, 00:17:52.933 "method": "bdev_nvme_attach_controller" 00:17:52.933 }, 00:17:52.933 { 00:17:52.933 "method": "bdev_wait_for_examine" 00:17:52.933 } 00:17:52.933 ] 00:17:52.933 } 00:17:52.933 ] 00:17:52.933 } 00:17:52.933 [2024-04-26 12:14:46.282891] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.191 [2024-04-26 12:14:46.403244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.449  Copying: 60/60 [kB] (average 19 MBps) 00:17:53.449 00:17:53.449 12:14:46 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:17:53.449 12:14:46 -- dd/basic_rw.sh@37 -- # gen_conf 00:17:53.449 12:14:46 -- dd/common.sh@31 -- # xtrace_disable 00:17:53.449 12:14:46 -- common/autotest_common.sh@10 -- # set +x 00:17:53.449 [2024-04-26 12:14:46.869624] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:17:53.449 [2024-04-26 12:14:46.869711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62523 ] 00:17:53.449 { 00:17:53.449 "subsystems": [ 00:17:53.449 { 00:17:53.449 "subsystem": "bdev", 00:17:53.449 "config": [ 00:17:53.449 { 00:17:53.449 "params": { 00:17:53.449 "trtype": "pcie", 00:17:53.449 "traddr": "0000:00:10.0", 00:17:53.449 "name": "Nvme0" 00:17:53.449 }, 00:17:53.449 "method": "bdev_nvme_attach_controller" 00:17:53.449 }, 00:17:53.449 { 00:17:53.449 "method": "bdev_wait_for_examine" 00:17:53.449 } 00:17:53.449 ] 00:17:53.449 } 00:17:53.449 ] 00:17:53.449 } 00:17:53.708 [2024-04-26 12:14:47.003111] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.708 [2024-04-26 12:14:47.113990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.224  Copying: 60/60 [kB] (average 19 MBps) 00:17:54.224 00:17:54.224 12:14:47 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:54.224 12:14:47 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:17:54.224 12:14:47 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:54.224 12:14:47 -- dd/common.sh@11 -- # local nvme_ref= 00:17:54.224 12:14:47 -- dd/common.sh@12 -- # local size=61440 00:17:54.224 12:14:47 -- dd/common.sh@14 -- # local bs=1048576 00:17:54.224 12:14:47 -- dd/common.sh@15 -- # local count=1 00:17:54.224 12:14:47 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:54.224 12:14:47 -- dd/common.sh@18 -- # gen_conf 00:17:54.224 12:14:47 -- dd/common.sh@31 -- # xtrace_disable 00:17:54.225 12:14:47 -- common/autotest_common.sh@10 -- # set +x 00:17:54.225 [2024-04-26 12:14:47.582876] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:17:54.225 [2024-04-26 12:14:47.582983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62539 ] 00:17:54.225 { 00:17:54.225 "subsystems": [ 00:17:54.225 { 00:17:54.225 "subsystem": "bdev", 00:17:54.225 "config": [ 00:17:54.225 { 00:17:54.225 "params": { 00:17:54.225 "trtype": "pcie", 00:17:54.225 "traddr": "0000:00:10.0", 00:17:54.225 "name": "Nvme0" 00:17:54.225 }, 00:17:54.225 "method": "bdev_nvme_attach_controller" 00:17:54.225 }, 00:17:54.225 { 00:17:54.225 "method": "bdev_wait_for_examine" 00:17:54.225 } 00:17:54.225 ] 00:17:54.225 } 00:17:54.225 ] 00:17:54.225 } 00:17:54.483 [2024-04-26 12:14:47.721560] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.483 [2024-04-26 12:14:47.838301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.035  Copying: 1024/1024 [kB] (average 1000 MBps) 00:17:55.035 00:17:55.035 12:14:48 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:17:55.035 12:14:48 -- dd/basic_rw.sh@23 -- # count=15 00:17:55.035 12:14:48 -- dd/basic_rw.sh@24 -- # count=15 00:17:55.035 12:14:48 -- dd/basic_rw.sh@25 -- # size=61440 00:17:55.035 12:14:48 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:17:55.035 12:14:48 -- dd/common.sh@98 -- # xtrace_disable 00:17:55.035 12:14:48 -- common/autotest_common.sh@10 -- # set +x 00:17:55.602 12:14:48 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:17:55.602 12:14:48 -- dd/basic_rw.sh@30 -- # gen_conf 00:17:55.602 12:14:48 -- dd/common.sh@31 -- # xtrace_disable 00:17:55.602 12:14:48 -- common/autotest_common.sh@10 -- # set +x 00:17:55.602 [2024-04-26 12:14:48.944768] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:17:55.602 [2024-04-26 12:14:48.944873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62563 ] 00:17:55.602 { 00:17:55.602 "subsystems": [ 00:17:55.602 { 00:17:55.602 "subsystem": "bdev", 00:17:55.602 "config": [ 00:17:55.602 { 00:17:55.602 "params": { 00:17:55.602 "trtype": "pcie", 00:17:55.602 "traddr": "0000:00:10.0", 00:17:55.602 "name": "Nvme0" 00:17:55.602 }, 00:17:55.602 "method": "bdev_nvme_attach_controller" 00:17:55.602 }, 00:17:55.602 { 00:17:55.602 "method": "bdev_wait_for_examine" 00:17:55.602 } 00:17:55.602 ] 00:17:55.602 } 00:17:55.602 ] 00:17:55.602 } 00:17:55.861 [2024-04-26 12:14:49.079449] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.861 [2024-04-26 12:14:49.221412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.378  Copying: 60/60 [kB] (average 58 MBps) 00:17:56.378 00:17:56.378 12:14:49 -- dd/basic_rw.sh@37 -- # gen_conf 00:17:56.378 12:14:49 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:17:56.378 12:14:49 -- dd/common.sh@31 -- # xtrace_disable 00:17:56.378 12:14:49 -- common/autotest_common.sh@10 -- # set +x 00:17:56.378 [2024-04-26 12:14:49.680951] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:17:56.378 [2024-04-26 12:14:49.681062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62577 ] 00:17:56.378 { 00:17:56.378 "subsystems": [ 00:17:56.378 { 00:17:56.378 "subsystem": "bdev", 00:17:56.378 "config": [ 00:17:56.378 { 00:17:56.378 "params": { 00:17:56.378 "trtype": "pcie", 00:17:56.378 "traddr": "0000:00:10.0", 00:17:56.378 "name": "Nvme0" 00:17:56.378 }, 00:17:56.378 "method": "bdev_nvme_attach_controller" 00:17:56.378 }, 00:17:56.378 { 00:17:56.378 "method": "bdev_wait_for_examine" 00:17:56.378 } 00:17:56.378 ] 00:17:56.378 } 00:17:56.378 ] 00:17:56.378 } 00:17:56.378 [2024-04-26 12:14:49.818474] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.636 [2024-04-26 12:14:49.951305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.152  Copying: 60/60 [kB] (average 58 MBps) 00:17:57.152 00:17:57.152 12:14:50 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:57.152 12:14:50 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:17:57.152 12:14:50 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:57.152 12:14:50 -- dd/common.sh@11 -- # local nvme_ref= 00:17:57.152 12:14:50 -- dd/common.sh@12 -- # local size=61440 00:17:57.152 12:14:50 -- dd/common.sh@14 -- # local bs=1048576 00:17:57.152 12:14:50 -- dd/common.sh@15 -- # local count=1 00:17:57.152 12:14:50 -- dd/common.sh@18 -- # gen_conf 00:17:57.152 12:14:50 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:57.152 12:14:50 -- dd/common.sh@31 -- # xtrace_disable 00:17:57.152 12:14:50 -- common/autotest_common.sh@10 -- # set +x 00:17:57.152 [2024-04-26 12:14:50.433162] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:17:57.152 [2024-04-26 12:14:50.433283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62592 ] 00:17:57.152 { 00:17:57.152 "subsystems": [ 00:17:57.152 { 00:17:57.152 "subsystem": "bdev", 00:17:57.152 "config": [ 00:17:57.152 { 00:17:57.152 "params": { 00:17:57.152 "trtype": "pcie", 00:17:57.152 "traddr": "0000:00:10.0", 00:17:57.152 "name": "Nvme0" 00:17:57.152 }, 00:17:57.152 "method": "bdev_nvme_attach_controller" 00:17:57.152 }, 00:17:57.152 { 00:17:57.152 "method": "bdev_wait_for_examine" 00:17:57.152 } 00:17:57.152 ] 00:17:57.152 } 00:17:57.152 ] 00:17:57.152 } 00:17:57.152 [2024-04-26 12:14:50.569513] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.410 [2024-04-26 12:14:50.689336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.726  Copying: 1024/1024 [kB] (average 1000 MBps) 00:17:57.726 00:17:57.726 12:14:51 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:17:57.726 12:14:51 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:17:57.726 12:14:51 -- dd/basic_rw.sh@23 -- # count=7 00:17:57.726 12:14:51 -- dd/basic_rw.sh@24 -- # count=7 00:17:57.726 12:14:51 -- dd/basic_rw.sh@25 -- # size=57344 00:17:57.726 12:14:51 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:17:57.726 12:14:51 -- dd/common.sh@98 -- # xtrace_disable 00:17:57.726 12:14:51 -- common/autotest_common.sh@10 -- # set +x 00:17:58.292 12:14:51 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:17:58.292 12:14:51 -- dd/basic_rw.sh@30 -- # gen_conf 00:17:58.292 12:14:51 -- dd/common.sh@31 -- # xtrace_disable 00:17:58.292 12:14:51 -- common/autotest_common.sh@10 -- # set +x 00:17:58.550 { 00:17:58.551 "subsystems": [ 00:17:58.551 { 00:17:58.551 "subsystem": "bdev", 00:17:58.551 "config": [ 00:17:58.551 { 00:17:58.551 "params": { 00:17:58.551 "trtype": "pcie", 00:17:58.551 "traddr": "0000:00:10.0", 00:17:58.551 "name": "Nvme0" 00:17:58.551 }, 00:17:58.551 "method": "bdev_nvme_attach_controller" 00:17:58.551 }, 00:17:58.551 { 00:17:58.551 "method": "bdev_wait_for_examine" 00:17:58.551 } 00:17:58.551 ] 00:17:58.551 } 00:17:58.551 ] 00:17:58.551 } 00:17:58.551 [2024-04-26 12:14:51.777750] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:17:58.551 [2024-04-26 12:14:51.777835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62617 ] 00:17:58.551 [2024-04-26 12:14:51.907646] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.809 [2024-04-26 12:14:52.053074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.067  Copying: 56/56 [kB] (average 27 MBps) 00:17:59.067 00:17:59.067 12:14:52 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:17:59.067 12:14:52 -- dd/basic_rw.sh@37 -- # gen_conf 00:17:59.067 12:14:52 -- dd/common.sh@31 -- # xtrace_disable 00:17:59.067 12:14:52 -- common/autotest_common.sh@10 -- # set +x 00:17:59.067 [2024-04-26 12:14:52.523534] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:17:59.067 [2024-04-26 12:14:52.523644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62630 ] 00:17:59.067 { 00:17:59.067 "subsystems": [ 00:17:59.067 { 00:17:59.067 "subsystem": "bdev", 00:17:59.067 "config": [ 00:17:59.067 { 00:17:59.067 "params": { 00:17:59.067 "trtype": "pcie", 00:17:59.067 "traddr": "0000:00:10.0", 00:17:59.067 "name": "Nvme0" 00:17:59.067 }, 00:17:59.067 "method": "bdev_nvme_attach_controller" 00:17:59.067 }, 00:17:59.067 { 00:17:59.067 "method": "bdev_wait_for_examine" 00:17:59.067 } 00:17:59.067 ] 00:17:59.067 } 00:17:59.067 ] 00:17:59.067 } 00:17:59.324 [2024-04-26 12:14:52.659919] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.324 [2024-04-26 12:14:52.777925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.843  Copying: 56/56 [kB] (average 27 MBps) 00:17:59.843 00:17:59.843 12:14:53 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:59.843 12:14:53 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:17:59.843 12:14:53 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:59.843 12:14:53 -- dd/common.sh@11 -- # local nvme_ref= 00:17:59.843 12:14:53 -- dd/common.sh@12 -- # local size=57344 00:17:59.843 12:14:53 -- dd/common.sh@14 -- # local bs=1048576 00:17:59.843 12:14:53 -- dd/common.sh@15 -- # local count=1 00:17:59.843 12:14:53 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:59.843 12:14:53 -- dd/common.sh@18 -- # gen_conf 00:17:59.843 12:14:53 -- dd/common.sh@31 -- # xtrace_disable 00:17:59.843 12:14:53 -- common/autotest_common.sh@10 -- # set +x 00:17:59.843 [2024-04-26 12:14:53.240056] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:17:59.843 [2024-04-26 12:14:53.240151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62651 ] 00:17:59.843 { 00:17:59.843 "subsystems": [ 00:17:59.843 { 00:17:59.843 "subsystem": "bdev", 00:17:59.843 "config": [ 00:17:59.843 { 00:17:59.843 "params": { 00:17:59.843 "trtype": "pcie", 00:17:59.843 "traddr": "0000:00:10.0", 00:17:59.843 "name": "Nvme0" 00:17:59.843 }, 00:17:59.843 "method": "bdev_nvme_attach_controller" 00:17:59.843 }, 00:17:59.843 { 00:17:59.843 "method": "bdev_wait_for_examine" 00:17:59.843 } 00:17:59.843 ] 00:17:59.843 } 00:17:59.843 ] 00:17:59.843 } 00:18:00.101 [2024-04-26 12:14:53.374147] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.101 [2024-04-26 12:14:53.498594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.618  Copying: 1024/1024 [kB] (average 1000 MBps) 00:18:00.618 00:18:00.618 12:14:53 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:00.618 12:14:53 -- dd/basic_rw.sh@23 -- # count=7 00:18:00.618 12:14:53 -- dd/basic_rw.sh@24 -- # count=7 00:18:00.618 12:14:53 -- dd/basic_rw.sh@25 -- # size=57344 00:18:00.618 12:14:53 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:18:00.618 12:14:53 -- dd/common.sh@98 -- # xtrace_disable 00:18:00.618 12:14:53 -- common/autotest_common.sh@10 -- # set +x 00:18:01.186 12:14:54 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:18:01.186 12:14:54 -- dd/basic_rw.sh@30 -- # gen_conf 00:18:01.186 12:14:54 -- dd/common.sh@31 -- # xtrace_disable 00:18:01.186 12:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:01.186 { 00:18:01.186 "subsystems": [ 00:18:01.186 { 00:18:01.186 "subsystem": "bdev", 00:18:01.186 "config": [ 00:18:01.186 { 00:18:01.186 "params": { 00:18:01.186 "trtype": "pcie", 00:18:01.186 "traddr": "0000:00:10.0", 00:18:01.186 "name": "Nvme0" 00:18:01.186 }, 00:18:01.186 "method": "bdev_nvme_attach_controller" 00:18:01.186 }, 00:18:01.186 { 00:18:01.186 "method": "bdev_wait_for_examine" 00:18:01.186 } 00:18:01.186 ] 00:18:01.186 } 00:18:01.186 ] 00:18:01.186 } 00:18:01.186 [2024-04-26 12:14:54.586512] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:01.186 [2024-04-26 12:14:54.586627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62671 ] 00:18:01.444 [2024-04-26 12:14:54.731597] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.444 [2024-04-26 12:14:54.861536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.962  Copying: 56/56 [kB] (average 54 MBps) 00:18:01.962 00:18:01.962 12:14:55 -- dd/basic_rw.sh@37 -- # gen_conf 00:18:01.962 12:14:55 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:18:01.962 12:14:55 -- dd/common.sh@31 -- # xtrace_disable 00:18:01.962 12:14:55 -- common/autotest_common.sh@10 -- # set +x 00:18:01.962 { 00:18:01.962 "subsystems": [ 00:18:01.962 { 00:18:01.962 "subsystem": "bdev", 00:18:01.962 "config": [ 00:18:01.962 { 00:18:01.962 "params": { 00:18:01.962 "trtype": "pcie", 00:18:01.962 "traddr": "0000:00:10.0", 00:18:01.962 "name": "Nvme0" 00:18:01.962 }, 00:18:01.962 "method": "bdev_nvme_attach_controller" 00:18:01.962 }, 00:18:01.962 { 00:18:01.962 "method": "bdev_wait_for_examine" 00:18:01.962 } 00:18:01.962 ] 00:18:01.962 } 00:18:01.962 ] 00:18:01.962 } 00:18:01.962 [2024-04-26 12:14:55.333654] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:01.962 [2024-04-26 12:14:55.333753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62690 ] 00:18:02.219 [2024-04-26 12:14:55.474915] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.220 [2024-04-26 12:14:55.607158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.735  Copying: 56/56 [kB] (average 54 MBps) 00:18:02.735 00:18:02.735 12:14:56 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:02.735 12:14:56 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:18:02.735 12:14:56 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:02.735 12:14:56 -- dd/common.sh@11 -- # local nvme_ref= 00:18:02.735 12:14:56 -- dd/common.sh@12 -- # local size=57344 00:18:02.735 12:14:56 -- dd/common.sh@14 -- # local bs=1048576 00:18:02.735 12:14:56 -- dd/common.sh@15 -- # local count=1 00:18:02.735 12:14:56 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:18:02.735 12:14:56 -- dd/common.sh@18 -- # gen_conf 00:18:02.735 12:14:56 -- dd/common.sh@31 -- # xtrace_disable 00:18:02.735 12:14:56 -- common/autotest_common.sh@10 -- # set +x 00:18:02.735 { 00:18:02.735 "subsystems": [ 00:18:02.735 { 00:18:02.735 "subsystem": "bdev", 00:18:02.735 "config": [ 00:18:02.735 { 00:18:02.735 "params": { 00:18:02.735 "trtype": "pcie", 00:18:02.735 "traddr": "0000:00:10.0", 00:18:02.735 "name": "Nvme0" 00:18:02.735 }, 00:18:02.735 "method": "bdev_nvme_attach_controller" 00:18:02.735 }, 00:18:02.735 { 00:18:02.735 "method": "bdev_wait_for_examine" 00:18:02.735 } 00:18:02.735 ] 00:18:02.735 } 00:18:02.735 ] 00:18:02.735 } 00:18:02.735 [2024-04-26 12:14:56.079978] Starting SPDK v24.05-pre git sha1 
e29339c01 / DPDK 23.11.0 initialization... 00:18:02.735 [2024-04-26 12:14:56.080088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62706 ] 00:18:02.993 [2024-04-26 12:14:56.221501] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.993 [2024-04-26 12:14:56.336915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.509  Copying: 1024/1024 [kB] (average 500 MBps) 00:18:03.509 00:18:03.509 12:14:56 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:18:03.509 12:14:56 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:03.509 12:14:56 -- dd/basic_rw.sh@23 -- # count=3 00:18:03.510 12:14:56 -- dd/basic_rw.sh@24 -- # count=3 00:18:03.510 12:14:56 -- dd/basic_rw.sh@25 -- # size=49152 00:18:03.510 12:14:56 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:18:03.510 12:14:56 -- dd/common.sh@98 -- # xtrace_disable 00:18:03.510 12:14:56 -- common/autotest_common.sh@10 -- # set +x 00:18:04.076 12:14:57 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:18:04.076 12:14:57 -- dd/basic_rw.sh@30 -- # gen_conf 00:18:04.076 12:14:57 -- dd/common.sh@31 -- # xtrace_disable 00:18:04.076 12:14:57 -- common/autotest_common.sh@10 -- # set +x 00:18:04.076 [2024-04-26 12:14:57.334116] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:04.076 [2024-04-26 12:14:57.334232] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62729 ] 00:18:04.076 { 00:18:04.076 "subsystems": [ 00:18:04.076 { 00:18:04.076 "subsystem": "bdev", 00:18:04.076 "config": [ 00:18:04.076 { 00:18:04.076 "params": { 00:18:04.076 "trtype": "pcie", 00:18:04.076 "traddr": "0000:00:10.0", 00:18:04.076 "name": "Nvme0" 00:18:04.076 }, 00:18:04.076 "method": "bdev_nvme_attach_controller" 00:18:04.076 }, 00:18:04.076 { 00:18:04.076 "method": "bdev_wait_for_examine" 00:18:04.076 } 00:18:04.076 ] 00:18:04.076 } 00:18:04.076 ] 00:18:04.076 } 00:18:04.076 [2024-04-26 12:14:57.464024] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.334 [2024-04-26 12:14:57.567530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.593  Copying: 48/48 [kB] (average 46 MBps) 00:18:04.593 00:18:04.593 12:14:57 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:18:04.593 12:14:57 -- dd/basic_rw.sh@37 -- # gen_conf 00:18:04.593 12:14:57 -- dd/common.sh@31 -- # xtrace_disable 00:18:04.593 12:14:57 -- common/autotest_common.sh@10 -- # set +x 00:18:04.593 [2024-04-26 12:14:58.001881] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:04.593 [2024-04-26 12:14:58.001984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62744 ] 00:18:04.593 { 00:18:04.593 "subsystems": [ 00:18:04.593 { 00:18:04.593 "subsystem": "bdev", 00:18:04.593 "config": [ 00:18:04.593 { 00:18:04.593 "params": { 00:18:04.593 "trtype": "pcie", 00:18:04.593 "traddr": "0000:00:10.0", 00:18:04.593 "name": "Nvme0" 00:18:04.593 }, 00:18:04.593 "method": "bdev_nvme_attach_controller" 00:18:04.593 }, 00:18:04.593 { 00:18:04.593 "method": "bdev_wait_for_examine" 00:18:04.593 } 00:18:04.593 ] 00:18:04.593 } 00:18:04.593 ] 00:18:04.593 } 00:18:04.851 [2024-04-26 12:14:58.132296] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.851 [2024-04-26 12:14:58.245753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.367  Copying: 48/48 [kB] (average 46 MBps) 00:18:05.367 00:18:05.367 12:14:58 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:05.367 12:14:58 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:18:05.367 12:14:58 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:05.367 12:14:58 -- dd/common.sh@11 -- # local nvme_ref= 00:18:05.367 12:14:58 -- dd/common.sh@12 -- # local size=49152 00:18:05.367 12:14:58 -- dd/common.sh@14 -- # local bs=1048576 00:18:05.367 12:14:58 -- dd/common.sh@15 -- # local count=1 00:18:05.367 12:14:58 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:18:05.367 12:14:58 -- dd/common.sh@18 -- # gen_conf 00:18:05.367 12:14:58 -- dd/common.sh@31 -- # xtrace_disable 00:18:05.367 12:14:58 -- common/autotest_common.sh@10 -- # set +x 00:18:05.367 { 00:18:05.367 "subsystems": [ 00:18:05.367 { 00:18:05.367 "subsystem": "bdev", 00:18:05.367 "config": [ 00:18:05.367 { 00:18:05.367 "params": { 00:18:05.367 "trtype": "pcie", 00:18:05.367 "traddr": "0000:00:10.0", 00:18:05.367 "name": "Nvme0" 00:18:05.367 }, 00:18:05.367 "method": "bdev_nvme_attach_controller" 00:18:05.367 }, 00:18:05.367 { 00:18:05.367 "method": "bdev_wait_for_examine" 00:18:05.367 } 00:18:05.367 ] 00:18:05.367 } 00:18:05.367 ] 00:18:05.367 } 00:18:05.367 [2024-04-26 12:14:58.711679] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:05.367 [2024-04-26 12:14:58.711836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62760 ] 00:18:05.625 [2024-04-26 12:14:58.848749] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.625 [2024-04-26 12:14:58.989261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.142  Copying: 1024/1024 [kB] (average 1000 MBps) 00:18:06.142 00:18:06.142 12:14:59 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:06.142 12:14:59 -- dd/basic_rw.sh@23 -- # count=3 00:18:06.142 12:14:59 -- dd/basic_rw.sh@24 -- # count=3 00:18:06.142 12:14:59 -- dd/basic_rw.sh@25 -- # size=49152 00:18:06.142 12:14:59 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:18:06.142 12:14:59 -- dd/common.sh@98 -- # xtrace_disable 00:18:06.142 12:14:59 -- common/autotest_common.sh@10 -- # set +x 00:18:06.707 12:14:59 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:18:06.707 12:14:59 -- dd/basic_rw.sh@30 -- # gen_conf 00:18:06.707 12:14:59 -- dd/common.sh@31 -- # xtrace_disable 00:18:06.707 12:14:59 -- common/autotest_common.sh@10 -- # set +x 00:18:06.707 [2024-04-26 12:14:59.948325] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:06.708 [2024-04-26 12:14:59.948401] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62784 ] 00:18:06.708 { 00:18:06.708 "subsystems": [ 00:18:06.708 { 00:18:06.708 "subsystem": "bdev", 00:18:06.708 "config": [ 00:18:06.708 { 00:18:06.708 "params": { 00:18:06.708 "trtype": "pcie", 00:18:06.708 "traddr": "0000:00:10.0", 00:18:06.708 "name": "Nvme0" 00:18:06.708 }, 00:18:06.708 "method": "bdev_nvme_attach_controller" 00:18:06.708 }, 00:18:06.708 { 00:18:06.708 "method": "bdev_wait_for_examine" 00:18:06.708 } 00:18:06.708 ] 00:18:06.708 } 00:18:06.708 ] 00:18:06.708 } 00:18:06.708 [2024-04-26 12:15:00.085904] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.974 [2024-04-26 12:15:00.216457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.235  Copying: 48/48 [kB] (average 46 MBps) 00:18:07.235 00:18:07.235 12:15:00 -- dd/basic_rw.sh@37 -- # gen_conf 00:18:07.235 12:15:00 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:18:07.235 12:15:00 -- dd/common.sh@31 -- # xtrace_disable 00:18:07.235 12:15:00 -- common/autotest_common.sh@10 -- # set +x 00:18:07.235 [2024-04-26 12:15:00.681616] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:07.235 [2024-04-26 12:15:00.681723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62797 ] 00:18:07.235 { 00:18:07.235 "subsystems": [ 00:18:07.235 { 00:18:07.235 "subsystem": "bdev", 00:18:07.235 "config": [ 00:18:07.235 { 00:18:07.235 "params": { 00:18:07.235 "trtype": "pcie", 00:18:07.235 "traddr": "0000:00:10.0", 00:18:07.235 "name": "Nvme0" 00:18:07.235 }, 00:18:07.235 "method": "bdev_nvme_attach_controller" 00:18:07.235 }, 00:18:07.235 { 00:18:07.235 "method": "bdev_wait_for_examine" 00:18:07.235 } 00:18:07.235 ] 00:18:07.235 } 00:18:07.235 ] 00:18:07.235 } 00:18:07.493 [2024-04-26 12:15:00.821535] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.493 [2024-04-26 12:15:00.934890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.011  Copying: 48/48 [kB] (average 46 MBps) 00:18:08.011 00:18:08.011 12:15:01 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:08.011 12:15:01 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:18:08.011 12:15:01 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:08.011 12:15:01 -- dd/common.sh@11 -- # local nvme_ref= 00:18:08.011 12:15:01 -- dd/common.sh@12 -- # local size=49152 00:18:08.011 12:15:01 -- dd/common.sh@14 -- # local bs=1048576 00:18:08.011 12:15:01 -- dd/common.sh@15 -- # local count=1 00:18:08.011 12:15:01 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:18:08.011 12:15:01 -- dd/common.sh@18 -- # gen_conf 00:18:08.011 12:15:01 -- dd/common.sh@31 -- # xtrace_disable 00:18:08.011 12:15:01 -- common/autotest_common.sh@10 -- # set +x 00:18:08.011 [2024-04-26 12:15:01.388093] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:08.011 [2024-04-26 12:15:01.388191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62813 ] 00:18:08.011 { 00:18:08.011 "subsystems": [ 00:18:08.011 { 00:18:08.011 "subsystem": "bdev", 00:18:08.012 "config": [ 00:18:08.012 { 00:18:08.012 "params": { 00:18:08.012 "trtype": "pcie", 00:18:08.012 "traddr": "0000:00:10.0", 00:18:08.012 "name": "Nvme0" 00:18:08.012 }, 00:18:08.012 "method": "bdev_nvme_attach_controller" 00:18:08.012 }, 00:18:08.012 { 00:18:08.012 "method": "bdev_wait_for_examine" 00:18:08.012 } 00:18:08.012 ] 00:18:08.012 } 00:18:08.012 ] 00:18:08.012 } 00:18:08.270 [2024-04-26 12:15:01.521094] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.270 [2024-04-26 12:15:01.639473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.787  Copying: 1024/1024 [kB] (average 1000 MBps) 00:18:08.787 00:18:08.787 00:18:08.787 real 0m16.614s 00:18:08.787 user 0m12.590s 00:18:08.787 sys 0m5.518s 00:18:08.787 12:15:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:08.787 ************************************ 00:18:08.787 END TEST dd_rw 00:18:08.787 ************************************ 00:18:08.787 12:15:02 -- common/autotest_common.sh@10 -- # set +x 00:18:08.787 12:15:02 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:18:08.787 12:15:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:08.787 12:15:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:08.787 12:15:02 -- common/autotest_common.sh@10 -- # set +x 00:18:08.787 ************************************ 00:18:08.787 START TEST dd_rw_offset 00:18:08.787 ************************************ 00:18:08.787 12:15:02 -- common/autotest_common.sh@1111 -- # basic_offset 00:18:08.787 12:15:02 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:18:08.787 12:15:02 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:18:08.787 12:15:02 -- dd/common.sh@98 -- # xtrace_disable 00:18:08.787 12:15:02 -- common/autotest_common.sh@10 -- # set +x 00:18:08.787 12:15:02 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:18:08.787 12:15:02 -- dd/basic_rw.sh@56 -- # 
data=b0l4aoei6wf1t6vfohub564fxa0n495ix2zdo7kt1b9v83h7rqf3v8sa0tgzj1glwbw02at80uhhi8id85fecxo17ru2mq19khe2pxmolu5f1kot9wosu5ws04mmo5h69cbzwcsssy53j0cemm4gosv3fq6ei6u7q267dh9v8etuockr2030r129vt50hxmwx2jxxrhkqhna0x0bknlvoezxkqvs9ip410592gerf0w0kmbi4bxyu5cxr5j03sjxq49cnqsebqs79dag9133g1b1nicap07xez1pp1v941v6i889n07zjjxuutqdlw1o8s6gpar7wsvwqx6psgvtb6fhtqvguxlw4abb9g4ewsxlmw7qysnp1412huz3jv5ek612dy8tofqbge4wodjfhbj5jg10896shruiw5ktncmoez10q7czrg9mwtg52xotpcqq17qj3twlfq0h01qoyrkqdu83hhye2je13nc881bvq878y29a2acmoglodizz9u4knnxm4mfal0zwoftbwptrlg9tjndl6ispmr735ptwg5eu2xlrlhltebj3pxxyu7vjmypdm13s6m579x2fgasm7539sqk84j6gqswd09bu9z0lyv6sfckjj0svd3iekm96h4eachsionyredshfi7zlan8i9tghioot2et5gxb39zm0aphz0k2xy1u74v4w8ebttd9z5uqnpa4ery53724df9260w5jftkslzignouqyb05fnmnw9gayxv41n7eubc9p7sr30swyjd4akomrsnuld14e9jc8ebbgmbxb1dq48gdqqa96jepinghmc6ypyjbihayiy763d1j63o4sjlxinjh5hcdefyz3uzk7wqjowhyc90jtv2et9qkhugz0mnejkop49ruysyqeo17u9efr8ee09s5hcimq1eob5qfidhl3werib5k5y49i485q7jkxxtvc7y9qm2riji3mm01s9c1au7yc4nsy83j3vad55nq7cqz52nvfqunl5ydoochx14qqu82d1x58rkig18f1w2zn5mkvyokq067flsmtkeu6rt51ur1ghtzrvdiw73vbp1o7klo2lcqnj56o985fmmh7afx7qxaxeoqlq0451br6n5qa58exyh64pzgsftqrkubquotobcbo4lv2lj3ntzpxy4hkd3f2pw1kurduisq1fpc7wluzvsxyddeqz5v6gnpt7sw7rdt5oodh2kyoltwurm9ox0to0vejnlqds1qpsz8ir78w6fvh8atvodq8mg4090c2n87besu9widhi5luw3vhulai6wbsch5ybznuootiss6mi0ywcfm1iqq8zavxil5mhhsitmec8sdk7j3qemtyukbm4m1whsz8ycb6trx5ckdwos945h04b9xn7i37mzri60w8u4b1cytqapex1kdntee1751z0yt5e81ufpgthbfr43weggiwcs02ou58rs71zmehdckm3i04lov6cb2kubw3jecrn0civlqppgnm9m1hybc8bbt0ij65cwyxecutblqlpomvrl7u6ggf5hqjky9vroz366esarirsgqfe3rtxvhikyas3oi9s44oj0en7xvj6zg6tjjggotyi4iw0jfovnzgxipldzhrxci0apfwsqe83gowc4iuvkr2q4fm1w31rkamvo8bs46zeibbmn5zvl31zxalkl83cpmb07xw4qsvkk5aaqub3uog0uv1w4xy2yiwuo5evcfzu4i5pxazje99a55irou7ysavzjri897ofq6wy6quujyuunua22zbpchnhhsfw7fzgh5lf1nydepnt52fjjn6b1fbr0tegfbe28w4pl4a6zc4uba68l58p698bwt54q08o3512ogqvopfhh0wr93vo2g06eu60qy37fevh893le0nz1agreghyk70n9yhpfeyapapakbu8ov7pi0epaam3g54wrtnqkbq3mrld3ou6dkgwyn82l48jg9ii8vknup62obqfazsc6f7q982pxi60jvt22c4kvlpinggzbqi2u244o2esaphqffrtfjxtracidljuhmlgvh043t3bfcmlzsmtkvxeet7x6cnsyi41mlt2c414ntopcf3c3b01nn8zej5jql4b43y8zvrmcm4na5o4h5k2wtpzj0nleynptgzzhsijyh02dmgr3e0w854v8slal58gc6ol2qqizcb9qitsp7s9ajotmp5z2dhoeh7j8qfktirafz7alwg6ma2d03rbfgxsxwrfchhf8xr36t2aaz2qvxi1wz71gzoxlyovrrzpt5t4lryqaz3qty9bdt650fv1itmb1pmq6tmp0b2gj2h7r4fjfgewquc7az9wmap2y6mqivo1zct207zkxyqwqh6n6q9knhtbyks1tgbbz35g714g0463at5xl3p0fwicel6rcdmnv07gjyo798xlan1lgo819ch7lfyok0qka381falh7th52foezt7exqy1e66mz2mgvrumz79encoyw8zaxrzzaw3v97j5fysuwdxyabgq5696o1vzv6pbidzaudw4zk3lsw8l7p0rzvf80a9ryw7b4l4mm54a8iuro6e4zek4xatywpqc6269pyz3w6n3e898dlmp5ohpwxkkctqf4coxzt9tht8krj8wgzsbp6x6ljglovfb6uo2a4rzx3yyo6un8hlq0gu7k7e0im5wc6qfhjbq5oqrsx2itqdvntbyk22l8ht2uugib7upb4xluo1atcs5nqkyg4h5pd4yk05q1rp0rne7xl2rie1hbtvdxsv2vw4cztb58256dqvl1xmcqtgpn3myd76r3ef82fhz9wjdo2id8uhzx745hnl9yo8i3ewynsfy4h2tpzn4f6451r0wbcaxcvk0xr191x61ilmjfjncakn5dw3suc70bz74ldodt1ix3x262ayn674sqn0w92t1losisx2mudqf0eru2nrn6onb4upm7kx53evgtnrxhij9dmezl1akx7zbcp1kosfpfbx0bqguao530yn7dyxpzx0opk4w15ej477p86vtgi57kgmb3ftufjsst2if781ba7iwizvmkori5hcn92ajjpgfizeuhtd7c31myvzz0bbz6i5shkk2kt60f7utz5g373fhg1omr5o9zm9e99i5j9y10l8jtflz8siabwlzwgauautgf908uivish7yy6q5ppc8pg4cs43izc6tl63h8rreaacb0q4w6f7nrisqqsuyef7e2vfaxfrl6vus4patjc2vfv2pnf1qi3opb9e6h64b1i5ktu5pcjurxioutd57zb3vze5cmi0coulvo74d6kua8jpf8h3zea7rugxyrcm670up9uulgns1776lw1c4ndjq7rte4k45obi5lgidfdyce0we8mg7uugoczczjnzcl7akx84cqa89rp5i4dk6uyo7mpkruojmn8jdehs3jqnfaxcso1n9ubfnudtrr5hf3tzbkd9uape14yxidmagamdji2v6ezipf0qwvo6
m49bwfa2pvs90ci9els76zmuuxoxcws8bauia3pw50d1lwssvc6vv5qab3pvoiqfkcmzni76sjqzbxy4ha66xsfifajt4nyv5pq90p5nd8omrgvmdnt3if0105l8yowk3r2oezxf0dahxvkwijpo88wy4y7j4eeq1ezt7pb1b6v9olrpul74m2l87qtytjh10swewgcpzr253gnwe2xyyjv3rw5ej8nbm0kijpfbagcy1ccjjgf98sknv8dx5knji0q9tgtntwcsfs8zz38zmd7qp58c88l872xllwlq3hiu5mopoo1nnaczdlz8j2momv47h1fp3knppmtqnz7rid90zzy7sxn76tflhd6jor4nlck5e25l8cnkhsua6ce8uaptc4bfy5np8jaofj2mp6854fyk7c716h3g3iigykg9nnpg3vezzrr6n01jf6v5vom2mpf9gnveq84ckt097o1g8z8jk9wvd02z7iu037881w86pw06zgx34wacf8r9k2l2qwvptud46xp633afujyw0581lrixgn 00:18:08.787 12:15:02 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:18:08.787 12:15:02 -- dd/basic_rw.sh@59 -- # gen_conf 00:18:08.787 12:15:02 -- dd/common.sh@31 -- # xtrace_disable 00:18:08.787 12:15:02 -- common/autotest_common.sh@10 -- # set +x 00:18:09.046 [2024-04-26 12:15:02.257632] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:09.046 [2024-04-26 12:15:02.257722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62854 ] 00:18:09.046 { 00:18:09.046 "subsystems": [ 00:18:09.046 { 00:18:09.046 "subsystem": "bdev", 00:18:09.046 "config": [ 00:18:09.046 { 00:18:09.046 "params": { 00:18:09.046 "trtype": "pcie", 00:18:09.046 "traddr": "0000:00:10.0", 00:18:09.046 "name": "Nvme0" 00:18:09.046 }, 00:18:09.046 "method": "bdev_nvme_attach_controller" 00:18:09.046 }, 00:18:09.046 { 00:18:09.046 "method": "bdev_wait_for_examine" 00:18:09.046 } 00:18:09.046 ] 00:18:09.046 } 00:18:09.046 ] 00:18:09.046 } 00:18:09.046 [2024-04-26 12:15:02.397688] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.304 [2024-04-26 12:15:02.520279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.562  Copying: 4096/4096 [B] (average 4000 kBps) 00:18:09.562 00:18:09.562 12:15:02 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:18:09.562 12:15:02 -- dd/basic_rw.sh@65 -- # gen_conf 00:18:09.562 12:15:02 -- dd/common.sh@31 -- # xtrace_disable 00:18:09.562 12:15:02 -- common/autotest_common.sh@10 -- # set +x 00:18:09.562 { 00:18:09.562 "subsystems": [ 00:18:09.562 { 00:18:09.562 "subsystem": "bdev", 00:18:09.562 "config": [ 00:18:09.562 { 00:18:09.562 "params": { 00:18:09.562 "trtype": "pcie", 00:18:09.562 "traddr": "0000:00:10.0", 00:18:09.562 "name": "Nvme0" 00:18:09.562 }, 00:18:09.562 "method": "bdev_nvme_attach_controller" 00:18:09.562 }, 00:18:09.562 { 00:18:09.562 "method": "bdev_wait_for_examine" 00:18:09.562 } 00:18:09.562 ] 00:18:09.562 } 00:18:09.562 ] 00:18:09.562 } 00:18:09.562 [2024-04-26 12:15:02.983528] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:09.562 [2024-04-26 12:15:02.983637] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62873 ] 00:18:09.821 [2024-04-26 12:15:03.122276] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.821 [2024-04-26 12:15:03.243177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.337  Copying: 4096/4096 [B] (average 4000 kBps) 00:18:10.337 00:18:10.337 12:15:03 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:18:10.337 ************************************ 00:18:10.337 END TEST dd_rw_offset 00:18:10.337 ************************************ 00:18:10.338 12:15:03 -- dd/basic_rw.sh@72 -- # [[ b0l4aoei6wf1t6vfohub564fxa0n495ix2zdo7kt1b9v83h7rqf3v8sa0tgzj1glwbw02at80uhhi8id85fecxo17ru2mq19khe2pxmolu5f1kot9wosu5ws04mmo5h69cbzwcsssy53j0cemm4gosv3fq6ei6u7q267dh9v8etuockr2030r129vt50hxmwx2jxxrhkqhna0x0bknlvoezxkqvs9ip410592gerf0w0kmbi4bxyu5cxr5j03sjxq49cnqsebqs79dag9133g1b1nicap07xez1pp1v941v6i889n07zjjxuutqdlw1o8s6gpar7wsvwqx6psgvtb6fhtqvguxlw4abb9g4ewsxlmw7qysnp1412huz3jv5ek612dy8tofqbge4wodjfhbj5jg10896shruiw5ktncmoez10q7czrg9mwtg52xotpcqq17qj3twlfq0h01qoyrkqdu83hhye2je13nc881bvq878y29a2acmoglodizz9u4knnxm4mfal0zwoftbwptrlg9tjndl6ispmr735ptwg5eu2xlrlhltebj3pxxyu7vjmypdm13s6m579x2fgasm7539sqk84j6gqswd09bu9z0lyv6sfckjj0svd3iekm96h4eachsionyredshfi7zlan8i9tghioot2et5gxb39zm0aphz0k2xy1u74v4w8ebttd9z5uqnpa4ery53724df9260w5jftkslzignouqyb05fnmnw9gayxv41n7eubc9p7sr30swyjd4akomrsnuld14e9jc8ebbgmbxb1dq48gdqqa96jepinghmc6ypyjbihayiy763d1j63o4sjlxinjh5hcdefyz3uzk7wqjowhyc90jtv2et9qkhugz0mnejkop49ruysyqeo17u9efr8ee09s5hcimq1eob5qfidhl3werib5k5y49i485q7jkxxtvc7y9qm2riji3mm01s9c1au7yc4nsy83j3vad55nq7cqz52nvfqunl5ydoochx14qqu82d1x58rkig18f1w2zn5mkvyokq067flsmtkeu6rt51ur1ghtzrvdiw73vbp1o7klo2lcqnj56o985fmmh7afx7qxaxeoqlq0451br6n5qa58exyh64pzgsftqrkubquotobcbo4lv2lj3ntzpxy4hkd3f2pw1kurduisq1fpc7wluzvsxyddeqz5v6gnpt7sw7rdt5oodh2kyoltwurm9ox0to0vejnlqds1qpsz8ir78w6fvh8atvodq8mg4090c2n87besu9widhi5luw3vhulai6wbsch5ybznuootiss6mi0ywcfm1iqq8zavxil5mhhsitmec8sdk7j3qemtyukbm4m1whsz8ycb6trx5ckdwos945h04b9xn7i37mzri60w8u4b1cytqapex1kdntee1751z0yt5e81ufpgthbfr43weggiwcs02ou58rs71zmehdckm3i04lov6cb2kubw3jecrn0civlqppgnm9m1hybc8bbt0ij65cwyxecutblqlpomvrl7u6ggf5hqjky9vroz366esarirsgqfe3rtxvhikyas3oi9s44oj0en7xvj6zg6tjjggotyi4iw0jfovnzgxipldzhrxci0apfwsqe83gowc4iuvkr2q4fm1w31rkamvo8bs46zeibbmn5zvl31zxalkl83cpmb07xw4qsvkk5aaqub3uog0uv1w4xy2yiwuo5evcfzu4i5pxazje99a55irou7ysavzjri897ofq6wy6quujyuunua22zbpchnhhsfw7fzgh5lf1nydepnt52fjjn6b1fbr0tegfbe28w4pl4a6zc4uba68l58p698bwt54q08o3512ogqvopfhh0wr93vo2g06eu60qy37fevh893le0nz1agreghyk70n9yhpfeyapapakbu8ov7pi0epaam3g54wrtnqkbq3mrld3ou6dkgwyn82l48jg9ii8vknup62obqfazsc6f7q982pxi60jvt22c4kvlpinggzbqi2u244o2esaphqffrtfjxtracidljuhmlgvh043t3bfcmlzsmtkvxeet7x6cnsyi41mlt2c414ntopcf3c3b01nn8zej5jql4b43y8zvrmcm4na5o4h5k2wtpzj0nleynptgzzhsijyh02dmgr3e0w854v8slal58gc6ol2qqizcb9qitsp7s9ajotmp5z2dhoeh7j8qfktirafz7alwg6ma2d03rbfgxsxwrfchhf8xr36t2aaz2qvxi1wz71gzoxlyovrrzpt5t4lryqaz3qty9bdt650fv1itmb1pmq6tmp0b2gj2h7r4fjfgewquc7az9wmap2y6mqivo1zct207zkxyqwqh6n6q9knhtbyks1tgbbz35g714g0463at5xl3p0fwicel6rcdmnv07gjyo798xlan1lgo819ch7lfyok0qka381falh7th52foezt7exqy1e66mz2mgvrumz79encoyw8zaxrzzaw3v97j5fysuwdxyabgq5696o1vzv6pbidzaudw4zk3lsw8l7p0rzvf80a9ryw7b4l4mm54a8iuro6e4zek4xatywpqc6269pyz3w6n3e898dlmp5ohpwxkkctqf4coxzt9tht8krj8wgzsbp6x6ljglovfb6uo2a4rzx3yyo6un8hlq0gu7k7e0im5wc6q
fhjbq5oqrsx2itqdvntbyk22l8ht2uugib7upb4xluo1atcs5nqkyg4h5pd4yk05q1rp0rne7xl2rie1hbtvdxsv2vw4cztb58256dqvl1xmcqtgpn3myd76r3ef82fhz9wjdo2id8uhzx745hnl9yo8i3ewynsfy4h2tpzn4f6451r0wbcaxcvk0xr191x61ilmjfjncakn5dw3suc70bz74ldodt1ix3x262ayn674sqn0w92t1losisx2mudqf0eru2nrn6onb4upm7kx53evgtnrxhij9dmezl1akx7zbcp1kosfpfbx0bqguao530yn7dyxpzx0opk4w15ej477p86vtgi57kgmb3ftufjsst2if781ba7iwizvmkori5hcn92ajjpgfizeuhtd7c31myvzz0bbz6i5shkk2kt60f7utz5g373fhg1omr5o9zm9e99i5j9y10l8jtflz8siabwlzwgauautgf908uivish7yy6q5ppc8pg4cs43izc6tl63h8rreaacb0q4w6f7nrisqqsuyef7e2vfaxfrl6vus4patjc2vfv2pnf1qi3opb9e6h64b1i5ktu5pcjurxioutd57zb3vze5cmi0coulvo74d6kua8jpf8h3zea7rugxyrcm670up9uulgns1776lw1c4ndjq7rte4k45obi5lgidfdyce0we8mg7uugoczczjnzcl7akx84cqa89rp5i4dk6uyo7mpkruojmn8jdehs3jqnfaxcso1n9ubfnudtrr5hf3tzbkd9uape14yxidmagamdji2v6ezipf0qwvo6m49bwfa2pvs90ci9els76zmuuxoxcws8bauia3pw50d1lwssvc6vv5qab3pvoiqfkcmzni76sjqzbxy4ha66xsfifajt4nyv5pq90p5nd8omrgvmdnt3if0105l8yowk3r2oezxf0dahxvkwijpo88wy4y7j4eeq1ezt7pb1b6v9olrpul74m2l87qtytjh10swewgcpzr253gnwe2xyyjv3rw5ej8nbm0kijpfbagcy1ccjjgf98sknv8dx5knji0q9tgtntwcsfs8zz38zmd7qp58c88l872xllwlq3hiu5mopoo1nnaczdlz8j2momv47h1fp3knppmtqnz7rid90zzy7sxn76tflhd6jor4nlck5e25l8cnkhsua6ce8uaptc4bfy5np8jaofj2mp6854fyk7c716h3g3iigykg9nnpg3vezzrr6n01jf6v5vom2mpf9gnveq84ckt097o1g8z8jk9wvd02z7iu037881w86pw06zgx34wacf8r9k2l2qwvptud46xp633afujyw0581lrixgn == \b\0\l\4\a\o\e\i\6\w\f\1\t\6\v\f\o\h\u\b\5\6\4\f\x\a\0\n\4\9\5\i\x\2\z\d\o\7\k\t\1\b\9\v\8\3\h\7\r\q\f\3\v\8\s\a\0\t\g\z\j\1\g\l\w\b\w\0\2\a\t\8\0\u\h\h\i\8\i\d\8\5\f\e\c\x\o\1\7\r\u\2\m\q\1\9\k\h\e\2\p\x\m\o\l\u\5\f\1\k\o\t\9\w\o\s\u\5\w\s\0\4\m\m\o\5\h\6\9\c\b\z\w\c\s\s\s\y\5\3\j\0\c\e\m\m\4\g\o\s\v\3\f\q\6\e\i\6\u\7\q\2\6\7\d\h\9\v\8\e\t\u\o\c\k\r\2\0\3\0\r\1\2\9\v\t\5\0\h\x\m\w\x\2\j\x\x\r\h\k\q\h\n\a\0\x\0\b\k\n\l\v\o\e\z\x\k\q\v\s\9\i\p\4\1\0\5\9\2\g\e\r\f\0\w\0\k\m\b\i\4\b\x\y\u\5\c\x\r\5\j\0\3\s\j\x\q\4\9\c\n\q\s\e\b\q\s\7\9\d\a\g\9\1\3\3\g\1\b\1\n\i\c\a\p\0\7\x\e\z\1\p\p\1\v\9\4\1\v\6\i\8\8\9\n\0\7\z\j\j\x\u\u\t\q\d\l\w\1\o\8\s\6\g\p\a\r\7\w\s\v\w\q\x\6\p\s\g\v\t\b\6\f\h\t\q\v\g\u\x\l\w\4\a\b\b\9\g\4\e\w\s\x\l\m\w\7\q\y\s\n\p\1\4\1\2\h\u\z\3\j\v\5\e\k\6\1\2\d\y\8\t\o\f\q\b\g\e\4\w\o\d\j\f\h\b\j\5\j\g\1\0\8\9\6\s\h\r\u\i\w\5\k\t\n\c\m\o\e\z\1\0\q\7\c\z\r\g\9\m\w\t\g\5\2\x\o\t\p\c\q\q\1\7\q\j\3\t\w\l\f\q\0\h\0\1\q\o\y\r\k\q\d\u\8\3\h\h\y\e\2\j\e\1\3\n\c\8\8\1\b\v\q\8\7\8\y\2\9\a\2\a\c\m\o\g\l\o\d\i\z\z\9\u\4\k\n\n\x\m\4\m\f\a\l\0\z\w\o\f\t\b\w\p\t\r\l\g\9\t\j\n\d\l\6\i\s\p\m\r\7\3\5\p\t\w\g\5\e\u\2\x\l\r\l\h\l\t\e\b\j\3\p\x\x\y\u\7\v\j\m\y\p\d\m\1\3\s\6\m\5\7\9\x\2\f\g\a\s\m\7\5\3\9\s\q\k\8\4\j\6\g\q\s\w\d\0\9\b\u\9\z\0\l\y\v\6\s\f\c\k\j\j\0\s\v\d\3\i\e\k\m\9\6\h\4\e\a\c\h\s\i\o\n\y\r\e\d\s\h\f\i\7\z\l\a\n\8\i\9\t\g\h\i\o\o\t\2\e\t\5\g\x\b\3\9\z\m\0\a\p\h\z\0\k\2\x\y\1\u\7\4\v\4\w\8\e\b\t\t\d\9\z\5\u\q\n\p\a\4\e\r\y\5\3\7\2\4\d\f\9\2\6\0\w\5\j\f\t\k\s\l\z\i\g\n\o\u\q\y\b\0\5\f\n\m\n\w\9\g\a\y\x\v\4\1\n\7\e\u\b\c\9\p\7\s\r\3\0\s\w\y\j\d\4\a\k\o\m\r\s\n\u\l\d\1\4\e\9\j\c\8\e\b\b\g\m\b\x\b\1\d\q\4\8\g\d\q\q\a\9\6\j\e\p\i\n\g\h\m\c\6\y\p\y\j\b\i\h\a\y\i\y\7\6\3\d\1\j\6\3\o\4\s\j\l\x\i\n\j\h\5\h\c\d\e\f\y\z\3\u\z\k\7\w\q\j\o\w\h\y\c\9\0\j\t\v\2\e\t\9\q\k\h\u\g\z\0\m\n\e\j\k\o\p\4\9\r\u\y\s\y\q\e\o\1\7\u\9\e\f\r\8\e\e\0\9\s\5\h\c\i\m\q\1\e\o\b\5\q\f\i\d\h\l\3\w\e\r\i\b\5\k\5\y\4\9\i\4\8\5\q\7\j\k\x\x\t\v\c\7\y\9\q\m\2\r\i\j\i\3\m\m\0\1\s\9\c\1\a\u\7\y\c\4\n\s\y\8\3\j\3\v\a\d\5\5\n\q\7\c\q\z\5\2\n\v\f\q\u\n\l\5\y\d\o\o\c\h\x\1\4\q\q\u\8\2\d\1\x\5\8\r\k\i\g\1\8\f\1\w\2\z\n\5\m\k\v\y\o\k\q\0\6\7\f\l\s\m\t\k\e\u\6\r\t\5\1\u\r\1\g\h\t\z\r\v\d\i\w\7\3\
v\b\p\1\o\7\k\l\o\2\l\c\q\n\j\5\6\o\9\8\5\f\m\m\h\7\a\f\x\7\q\x\a\x\e\o\q\l\q\0\4\5\1\b\r\6\n\5\q\a\5\8\e\x\y\h\6\4\p\z\g\s\f\t\q\r\k\u\b\q\u\o\t\o\b\c\b\o\4\l\v\2\l\j\3\n\t\z\p\x\y\4\h\k\d\3\f\2\p\w\1\k\u\r\d\u\i\s\q\1\f\p\c\7\w\l\u\z\v\s\x\y\d\d\e\q\z\5\v\6\g\n\p\t\7\s\w\7\r\d\t\5\o\o\d\h\2\k\y\o\l\t\w\u\r\m\9\o\x\0\t\o\0\v\e\j\n\l\q\d\s\1\q\p\s\z\8\i\r\7\8\w\6\f\v\h\8\a\t\v\o\d\q\8\m\g\4\0\9\0\c\2\n\8\7\b\e\s\u\9\w\i\d\h\i\5\l\u\w\3\v\h\u\l\a\i\6\w\b\s\c\h\5\y\b\z\n\u\o\o\t\i\s\s\6\m\i\0\y\w\c\f\m\1\i\q\q\8\z\a\v\x\i\l\5\m\h\h\s\i\t\m\e\c\8\s\d\k\7\j\3\q\e\m\t\y\u\k\b\m\4\m\1\w\h\s\z\8\y\c\b\6\t\r\x\5\c\k\d\w\o\s\9\4\5\h\0\4\b\9\x\n\7\i\3\7\m\z\r\i\6\0\w\8\u\4\b\1\c\y\t\q\a\p\e\x\1\k\d\n\t\e\e\1\7\5\1\z\0\y\t\5\e\8\1\u\f\p\g\t\h\b\f\r\4\3\w\e\g\g\i\w\c\s\0\2\o\u\5\8\r\s\7\1\z\m\e\h\d\c\k\m\3\i\0\4\l\o\v\6\c\b\2\k\u\b\w\3\j\e\c\r\n\0\c\i\v\l\q\p\p\g\n\m\9\m\1\h\y\b\c\8\b\b\t\0\i\j\6\5\c\w\y\x\e\c\u\t\b\l\q\l\p\o\m\v\r\l\7\u\6\g\g\f\5\h\q\j\k\y\9\v\r\o\z\3\6\6\e\s\a\r\i\r\s\g\q\f\e\3\r\t\x\v\h\i\k\y\a\s\3\o\i\9\s\4\4\o\j\0\e\n\7\x\v\j\6\z\g\6\t\j\j\g\g\o\t\y\i\4\i\w\0\j\f\o\v\n\z\g\x\i\p\l\d\z\h\r\x\c\i\0\a\p\f\w\s\q\e\8\3\g\o\w\c\4\i\u\v\k\r\2\q\4\f\m\1\w\3\1\r\k\a\m\v\o\8\b\s\4\6\z\e\i\b\b\m\n\5\z\v\l\3\1\z\x\a\l\k\l\8\3\c\p\m\b\0\7\x\w\4\q\s\v\k\k\5\a\a\q\u\b\3\u\o\g\0\u\v\1\w\4\x\y\2\y\i\w\u\o\5\e\v\c\f\z\u\4\i\5\p\x\a\z\j\e\9\9\a\5\5\i\r\o\u\7\y\s\a\v\z\j\r\i\8\9\7\o\f\q\6\w\y\6\q\u\u\j\y\u\u\n\u\a\2\2\z\b\p\c\h\n\h\h\s\f\w\7\f\z\g\h\5\l\f\1\n\y\d\e\p\n\t\5\2\f\j\j\n\6\b\1\f\b\r\0\t\e\g\f\b\e\2\8\w\4\p\l\4\a\6\z\c\4\u\b\a\6\8\l\5\8\p\6\9\8\b\w\t\5\4\q\0\8\o\3\5\1\2\o\g\q\v\o\p\f\h\h\0\w\r\9\3\v\o\2\g\0\6\e\u\6\0\q\y\3\7\f\e\v\h\8\9\3\l\e\0\n\z\1\a\g\r\e\g\h\y\k\7\0\n\9\y\h\p\f\e\y\a\p\a\p\a\k\b\u\8\o\v\7\p\i\0\e\p\a\a\m\3\g\5\4\w\r\t\n\q\k\b\q\3\m\r\l\d\3\o\u\6\d\k\g\w\y\n\8\2\l\4\8\j\g\9\i\i\8\v\k\n\u\p\6\2\o\b\q\f\a\z\s\c\6\f\7\q\9\8\2\p\x\i\6\0\j\v\t\2\2\c\4\k\v\l\p\i\n\g\g\z\b\q\i\2\u\2\4\4\o\2\e\s\a\p\h\q\f\f\r\t\f\j\x\t\r\a\c\i\d\l\j\u\h\m\l\g\v\h\0\4\3\t\3\b\f\c\m\l\z\s\m\t\k\v\x\e\e\t\7\x\6\c\n\s\y\i\4\1\m\l\t\2\c\4\1\4\n\t\o\p\c\f\3\c\3\b\0\1\n\n\8\z\e\j\5\j\q\l\4\b\4\3\y\8\z\v\r\m\c\m\4\n\a\5\o\4\h\5\k\2\w\t\p\z\j\0\n\l\e\y\n\p\t\g\z\z\h\s\i\j\y\h\0\2\d\m\g\r\3\e\0\w\8\5\4\v\8\s\l\a\l\5\8\g\c\6\o\l\2\q\q\i\z\c\b\9\q\i\t\s\p\7\s\9\a\j\o\t\m\p\5\z\2\d\h\o\e\h\7\j\8\q\f\k\t\i\r\a\f\z\7\a\l\w\g\6\m\a\2\d\0\3\r\b\f\g\x\s\x\w\r\f\c\h\h\f\8\x\r\3\6\t\2\a\a\z\2\q\v\x\i\1\w\z\7\1\g\z\o\x\l\y\o\v\r\r\z\p\t\5\t\4\l\r\y\q\a\z\3\q\t\y\9\b\d\t\6\5\0\f\v\1\i\t\m\b\1\p\m\q\6\t\m\p\0\b\2\g\j\2\h\7\r\4\f\j\f\g\e\w\q\u\c\7\a\z\9\w\m\a\p\2\y\6\m\q\i\v\o\1\z\c\t\2\0\7\z\k\x\y\q\w\q\h\6\n\6\q\9\k\n\h\t\b\y\k\s\1\t\g\b\b\z\3\5\g\7\1\4\g\0\4\6\3\a\t\5\x\l\3\p\0\f\w\i\c\e\l\6\r\c\d\m\n\v\0\7\g\j\y\o\7\9\8\x\l\a\n\1\l\g\o\8\1\9\c\h\7\l\f\y\o\k\0\q\k\a\3\8\1\f\a\l\h\7\t\h\5\2\f\o\e\z\t\7\e\x\q\y\1\e\6\6\m\z\2\m\g\v\r\u\m\z\7\9\e\n\c\o\y\w\8\z\a\x\r\z\z\a\w\3\v\9\7\j\5\f\y\s\u\w\d\x\y\a\b\g\q\5\6\9\6\o\1\v\z\v\6\p\b\i\d\z\a\u\d\w\4\z\k\3\l\s\w\8\l\7\p\0\r\z\v\f\8\0\a\9\r\y\w\7\b\4\l\4\m\m\5\4\a\8\i\u\r\o\6\e\4\z\e\k\4\x\a\t\y\w\p\q\c\6\2\6\9\p\y\z\3\w\6\n\3\e\8\9\8\d\l\m\p\5\o\h\p\w\x\k\k\c\t\q\f\4\c\o\x\z\t\9\t\h\t\8\k\r\j\8\w\g\z\s\b\p\6\x\6\l\j\g\l\o\v\f\b\6\u\o\2\a\4\r\z\x\3\y\y\o\6\u\n\8\h\l\q\0\g\u\7\k\7\e\0\i\m\5\w\c\6\q\f\h\j\b\q\5\o\q\r\s\x\2\i\t\q\d\v\n\t\b\y\k\2\2\l\8\h\t\2\u\u\g\i\b\7\u\p\b\4\x\l\u\o\1\a\t\c\s\5\n\q\k\y\g\4\h\5\p\d\4\y\k\0\5\q\1\r\p\0\r\n\e\7\x\l\2\r\i\e\1\h\b\t\v\d\x\s\v\2\v\w\4\c\z\t\b\5\8\2\5\6\d\q\v\l\1\x\m\c\q\t\g\p\n\3\m\y\d\7\6\r\3\e\f\8\2\f\h\z\9\w\j\d\o\2\i\d\8\u\h
\z\x\7\4\5\h\n\l\9\y\o\8\i\3\e\w\y\n\s\f\y\4\h\2\t\p\z\n\4\f\6\4\5\1\r\0\w\b\c\a\x\c\v\k\0\x\r\1\9\1\x\6\1\i\l\m\j\f\j\n\c\a\k\n\5\d\w\3\s\u\c\7\0\b\z\7\4\l\d\o\d\t\1\i\x\3\x\2\6\2\a\y\n\6\7\4\s\q\n\0\w\9\2\t\1\l\o\s\i\s\x\2\m\u\d\q\f\0\e\r\u\2\n\r\n\6\o\n\b\4\u\p\m\7\k\x\5\3\e\v\g\t\n\r\x\h\i\j\9\d\m\e\z\l\1\a\k\x\7\z\b\c\p\1\k\o\s\f\p\f\b\x\0\b\q\g\u\a\o\5\3\0\y\n\7\d\y\x\p\z\x\0\o\p\k\4\w\1\5\e\j\4\7\7\p\8\6\v\t\g\i\5\7\k\g\m\b\3\f\t\u\f\j\s\s\t\2\i\f\7\8\1\b\a\7\i\w\i\z\v\m\k\o\r\i\5\h\c\n\9\2\a\j\j\p\g\f\i\z\e\u\h\t\d\7\c\3\1\m\y\v\z\z\0\b\b\z\6\i\5\s\h\k\k\2\k\t\6\0\f\7\u\t\z\5\g\3\7\3\f\h\g\1\o\m\r\5\o\9\z\m\9\e\9\9\i\5\j\9\y\1\0\l\8\j\t\f\l\z\8\s\i\a\b\w\l\z\w\g\a\u\a\u\t\g\f\9\0\8\u\i\v\i\s\h\7\y\y\6\q\5\p\p\c\8\p\g\4\c\s\4\3\i\z\c\6\t\l\6\3\h\8\r\r\e\a\a\c\b\0\q\4\w\6\f\7\n\r\i\s\q\q\s\u\y\e\f\7\e\2\v\f\a\x\f\r\l\6\v\u\s\4\p\a\t\j\c\2\v\f\v\2\p\n\f\1\q\i\3\o\p\b\9\e\6\h\6\4\b\1\i\5\k\t\u\5\p\c\j\u\r\x\i\o\u\t\d\5\7\z\b\3\v\z\e\5\c\m\i\0\c\o\u\l\v\o\7\4\d\6\k\u\a\8\j\p\f\8\h\3\z\e\a\7\r\u\g\x\y\r\c\m\6\7\0\u\p\9\u\u\l\g\n\s\1\7\7\6\l\w\1\c\4\n\d\j\q\7\r\t\e\4\k\4\5\o\b\i\5\l\g\i\d\f\d\y\c\e\0\w\e\8\m\g\7\u\u\g\o\c\z\c\z\j\n\z\c\l\7\a\k\x\8\4\c\q\a\8\9\r\p\5\i\4\d\k\6\u\y\o\7\m\p\k\r\u\o\j\m\n\8\j\d\e\h\s\3\j\q\n\f\a\x\c\s\o\1\n\9\u\b\f\n\u\d\t\r\r\5\h\f\3\t\z\b\k\d\9\u\a\p\e\1\4\y\x\i\d\m\a\g\a\m\d\j\i\2\v\6\e\z\i\p\f\0\q\w\v\o\6\m\4\9\b\w\f\a\2\p\v\s\9\0\c\i\9\e\l\s\7\6\z\m\u\u\x\o\x\c\w\s\8\b\a\u\i\a\3\p\w\5\0\d\1\l\w\s\s\v\c\6\v\v\5\q\a\b\3\p\v\o\i\q\f\k\c\m\z\n\i\7\6\s\j\q\z\b\x\y\4\h\a\6\6\x\s\f\i\f\a\j\t\4\n\y\v\5\p\q\9\0\p\5\n\d\8\o\m\r\g\v\m\d\n\t\3\i\f\0\1\0\5\l\8\y\o\w\k\3\r\2\o\e\z\x\f\0\d\a\h\x\v\k\w\i\j\p\o\8\8\w\y\4\y\7\j\4\e\e\q\1\e\z\t\7\p\b\1\b\6\v\9\o\l\r\p\u\l\7\4\m\2\l\8\7\q\t\y\t\j\h\1\0\s\w\e\w\g\c\p\z\r\2\5\3\g\n\w\e\2\x\y\y\j\v\3\r\w\5\e\j\8\n\b\m\0\k\i\j\p\f\b\a\g\c\y\1\c\c\j\j\g\f\9\8\s\k\n\v\8\d\x\5\k\n\j\i\0\q\9\t\g\t\n\t\w\c\s\f\s\8\z\z\3\8\z\m\d\7\q\p\5\8\c\8\8\l\8\7\2\x\l\l\w\l\q\3\h\i\u\5\m\o\p\o\o\1\n\n\a\c\z\d\l\z\8\j\2\m\o\m\v\4\7\h\1\f\p\3\k\n\p\p\m\t\q\n\z\7\r\i\d\9\0\z\z\y\7\s\x\n\7\6\t\f\l\h\d\6\j\o\r\4\n\l\c\k\5\e\2\5\l\8\c\n\k\h\s\u\a\6\c\e\8\u\a\p\t\c\4\b\f\y\5\n\p\8\j\a\o\f\j\2\m\p\6\8\5\4\f\y\k\7\c\7\1\6\h\3\g\3\i\i\g\y\k\g\9\n\n\p\g\3\v\e\z\z\r\r\6\n\0\1\j\f\6\v\5\v\o\m\2\m\p\f\9\g\n\v\e\q\8\4\c\k\t\0\9\7\o\1\g\8\z\8\j\k\9\w\v\d\0\2\z\7\i\u\0\3\7\8\8\1\w\8\6\p\w\0\6\z\g\x\3\4\w\a\c\f\8\r\9\k\2\l\2\q\w\v\p\t\u\d\4\6\x\p\6\3\3\a\f\u\j\y\w\0\5\8\1\l\r\i\x\g\n ]] 00:18:10.338 00:18:10.338 real 0m1.504s 00:18:10.338 user 0m1.088s 00:18:10.338 sys 0m0.589s 00:18:10.338 12:15:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:10.338 12:15:03 -- common/autotest_common.sh@10 -- # set +x 00:18:10.338 12:15:03 -- dd/basic_rw.sh@1 -- # cleanup 00:18:10.338 12:15:03 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:18:10.338 12:15:03 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:10.338 12:15:03 -- dd/common.sh@11 -- # local nvme_ref= 00:18:10.338 12:15:03 -- dd/common.sh@12 -- # local size=0xffff 00:18:10.338 12:15:03 -- dd/common.sh@14 -- # local bs=1048576 00:18:10.338 12:15:03 -- dd/common.sh@15 -- # local count=1 00:18:10.338 12:15:03 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:18:10.338 12:15:03 -- dd/common.sh@18 -- # gen_conf 00:18:10.338 12:15:03 -- dd/common.sh@31 -- # xtrace_disable 00:18:10.338 12:15:03 -- common/autotest_common.sh@10 -- # set +x 00:18:10.338 [2024-04-26 12:15:03.767996] Starting SPDK v24.05-pre git sha1 
e29339c01 / DPDK 23.11.0 initialization... 00:18:10.338 [2024-04-26 12:15:03.768087] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62902 ] 00:18:10.338 { 00:18:10.338 "subsystems": [ 00:18:10.338 { 00:18:10.338 "subsystem": "bdev", 00:18:10.338 "config": [ 00:18:10.338 { 00:18:10.338 "params": { 00:18:10.338 "trtype": "pcie", 00:18:10.338 "traddr": "0000:00:10.0", 00:18:10.338 "name": "Nvme0" 00:18:10.338 }, 00:18:10.338 "method": "bdev_nvme_attach_controller" 00:18:10.338 }, 00:18:10.338 { 00:18:10.338 "method": "bdev_wait_for_examine" 00:18:10.338 } 00:18:10.338 ] 00:18:10.338 } 00:18:10.338 ] 00:18:10.338 } 00:18:10.596 [2024-04-26 12:15:03.904553] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.596 [2024-04-26 12:15:04.022114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.114  Copying: 1024/1024 [kB] (average 500 MBps) 00:18:11.114 00:18:11.114 12:15:04 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:11.114 00:18:11.114 real 0m20.208s 00:18:11.114 user 0m14.903s 00:18:11.114 sys 0m6.858s 00:18:11.114 12:15:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:11.114 ************************************ 00:18:11.114 END TEST spdk_dd_basic_rw 00:18:11.114 ************************************ 00:18:11.114 12:15:04 -- common/autotest_common.sh@10 -- # set +x 00:18:11.114 12:15:04 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:18:11.114 12:15:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:11.114 12:15:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:11.114 12:15:04 -- common/autotest_common.sh@10 -- # set +x 00:18:11.114 ************************************ 00:18:11.114 START TEST spdk_dd_posix 00:18:11.114 ************************************ 00:18:11.115 12:15:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:18:11.374 * Looking for test storage... 
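(Annotation: the clear_nvme teardown traced just above reduces to streaming a two-entry bdev config into spdk_dd and zero-filling one 1 MiB block of the namespace. The sketch below is reconstructed from the trace, not the verbatim helper; paths, the Nvme0 name and the 0000:00:10.0 address are taken from the log, and the real script builds the JSON through gen_conf and a file descriptor rather than inline.)

    # attach the PCIe controller at 0000:00:10.0 as "Nvme0", then zero one 1 MiB block of Nvme0n1
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 \
        --json <(echo '{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}')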
00:18:11.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:11.374 12:15:04 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:11.374 12:15:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.374 12:15:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.374 12:15:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.374 12:15:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.374 12:15:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.374 12:15:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.374 12:15:04 -- paths/export.sh@5 -- # export PATH 00:18:11.374 12:15:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.374 12:15:04 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:18:11.374 12:15:04 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:18:11.374 12:15:04 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:18:11.374 12:15:04 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:18:11.374 12:15:04 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:11.374 12:15:04 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:11.374 12:15:04 -- dd/posix.sh@130 -- # tests 00:18:11.374 12:15:04 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:18:11.374 * First test run, liburing in use 00:18:11.374 12:15:04 -- dd/posix.sh@102 -- # run_test 
dd_flag_append append 00:18:11.374 12:15:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:11.374 12:15:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:11.374 12:15:04 -- common/autotest_common.sh@10 -- # set +x 00:18:11.374 ************************************ 00:18:11.374 START TEST dd_flag_append 00:18:11.374 ************************************ 00:18:11.374 12:15:04 -- common/autotest_common.sh@1111 -- # append 00:18:11.374 12:15:04 -- dd/posix.sh@16 -- # local dump0 00:18:11.374 12:15:04 -- dd/posix.sh@17 -- # local dump1 00:18:11.374 12:15:04 -- dd/posix.sh@19 -- # gen_bytes 32 00:18:11.374 12:15:04 -- dd/common.sh@98 -- # xtrace_disable 00:18:11.374 12:15:04 -- common/autotest_common.sh@10 -- # set +x 00:18:11.374 12:15:04 -- dd/posix.sh@19 -- # dump0=22k3wl5x4ethg513vhp0v149teq9zqk6 00:18:11.374 12:15:04 -- dd/posix.sh@20 -- # gen_bytes 32 00:18:11.374 12:15:04 -- dd/common.sh@98 -- # xtrace_disable 00:18:11.374 12:15:04 -- common/autotest_common.sh@10 -- # set +x 00:18:11.374 12:15:04 -- dd/posix.sh@20 -- # dump1=1jwhmynwhfrnth58it6x8xgo8fewz2k4 00:18:11.374 12:15:04 -- dd/posix.sh@22 -- # printf %s 22k3wl5x4ethg513vhp0v149teq9zqk6 00:18:11.374 12:15:04 -- dd/posix.sh@23 -- # printf %s 1jwhmynwhfrnth58it6x8xgo8fewz2k4 00:18:11.374 12:15:04 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:18:11.374 [2024-04-26 12:15:04.779423] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:11.374 [2024-04-26 12:15:04.779494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62974 ] 00:18:11.632 [2024-04-26 12:15:04.911723] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.632 [2024-04-26 12:15:05.025222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.891  Copying: 32/32 [B] (average 31 kBps) 00:18:11.891 00:18:11.891 12:15:05 -- dd/posix.sh@27 -- # [[ 1jwhmynwhfrnth58it6x8xgo8fewz2k422k3wl5x4ethg513vhp0v149teq9zqk6 == \1\j\w\h\m\y\n\w\h\f\r\n\t\h\5\8\i\t\6\x\8\x\g\o\8\f\e\w\z\2\k\4\2\2\k\3\w\l\5\x\4\e\t\h\g\5\1\3\v\h\p\0\v\1\4\9\t\e\q\9\z\q\k\6 ]] 00:18:11.891 00:18:11.891 real 0m0.632s 00:18:11.891 user 0m0.368s 00:18:11.891 sys 0m0.292s 00:18:11.891 12:15:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:12.150 ************************************ 00:18:12.150 END TEST dd_flag_append 00:18:12.150 ************************************ 00:18:12.150 12:15:05 -- common/autotest_common.sh@10 -- # set +x 00:18:12.150 12:15:05 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:18:12.150 12:15:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:12.150 12:15:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:12.150 12:15:05 -- common/autotest_common.sh@10 -- # set +x 00:18:12.150 ************************************ 00:18:12.150 START TEST dd_flag_directory 00:18:12.150 ************************************ 00:18:12.150 12:15:05 -- common/autotest_common.sh@1111 -- # directory 00:18:12.150 12:15:05 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:12.150 12:15:05 -- 
common/autotest_common.sh@638 -- # local es=0 00:18:12.150 12:15:05 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:12.150 12:15:05 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:12.150 12:15:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:12.150 12:15:05 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:12.150 12:15:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:12.150 12:15:05 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:12.150 12:15:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:12.150 12:15:05 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:12.150 12:15:05 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:12.150 12:15:05 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:12.150 [2024-04-26 12:15:05.540813] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:12.150 [2024-04-26 12:15:05.540939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63008 ] 00:18:12.417 [2024-04-26 12:15:05.680507] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.418 [2024-04-26 12:15:05.789485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.681 [2024-04-26 12:15:05.881559] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:12.681 [2024-04-26 12:15:05.881624] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:12.681 [2024-04-26 12:15:05.881645] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:12.681 [2024-04-26 12:15:06.003367] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:12.681 12:15:06 -- common/autotest_common.sh@641 -- # es=236 00:18:12.681 12:15:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:12.681 12:15:06 -- common/autotest_common.sh@650 -- # es=108 00:18:12.681 12:15:06 -- common/autotest_common.sh@651 -- # case "$es" in 00:18:12.681 12:15:06 -- common/autotest_common.sh@658 -- # es=1 00:18:12.681 12:15:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:12.681 12:15:06 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:12.681 12:15:06 -- common/autotest_common.sh@638 -- # local es=0 00:18:12.681 12:15:06 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:12.681 12:15:06 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
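(Annotation: the dd_flag_append run that completed a little above boils down to writing two 32-byte random strings into dd.dump0 and dd.dump1, copying dump0 onto dump1 with --oflag=append, and confirming dump0's bytes landed after dump1's original contents. A rough shell equivalent, where D is shorthand introduced here, spdk_dd stands for the full build/bin/spdk_dd path, and dump0/dump1 are assumed to already hold the gen_bytes output:)

    D=/home/vagrant/spdk_repo/spdk/test/dd
    printf %s "$dump0" > "$D/dd.dump0"              # 32 random bytes each, from gen_bytes 32
    printf %s "$dump1" > "$D/dd.dump1"
    spdk_dd --if="$D/dd.dump0" --of="$D/dd.dump1" --oflag=append
    [[ $(< "$D/dd.dump1") == "${dump1}${dump0}" ]]  # append keeps dump1's original bytes in front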
00:18:12.681 12:15:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:12.681 12:15:06 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:12.681 12:15:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:12.681 12:15:06 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:12.681 12:15:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:12.681 12:15:06 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:12.681 12:15:06 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:12.681 12:15:06 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:12.939 [2024-04-26 12:15:06.185046] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:12.939 [2024-04-26 12:15:06.185199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63023 ] 00:18:12.939 [2024-04-26 12:15:06.321707] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.227 [2024-04-26 12:15:06.425320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.227 [2024-04-26 12:15:06.515040] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:13.227 [2024-04-26 12:15:06.515103] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:13.227 [2024-04-26 12:15:06.515131] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:13.227 [2024-04-26 12:15:06.630691] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:13.485 12:15:06 -- common/autotest_common.sh@641 -- # es=236 00:18:13.485 12:15:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:13.485 12:15:06 -- common/autotest_common.sh@650 -- # es=108 00:18:13.485 12:15:06 -- common/autotest_common.sh@651 -- # case "$es" in 00:18:13.485 12:15:06 -- common/autotest_common.sh@658 -- # es=1 00:18:13.485 12:15:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:13.485 00:18:13.485 real 0m1.267s 00:18:13.485 user 0m0.765s 00:18:13.485 sys 0m0.292s 00:18:13.485 12:15:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:13.485 12:15:06 -- common/autotest_common.sh@10 -- # set +x 00:18:13.485 ************************************ 00:18:13.485 END TEST dd_flag_directory 00:18:13.485 ************************************ 00:18:13.485 12:15:06 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:18:13.485 12:15:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:13.485 12:15:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:13.485 12:15:06 -- common/autotest_common.sh@10 -- # set +x 00:18:13.485 ************************************ 00:18:13.485 START TEST dd_flag_nofollow 00:18:13.485 ************************************ 00:18:13.485 12:15:06 -- common/autotest_common.sh@1111 -- # nofollow 00:18:13.485 12:15:06 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:13.485 12:15:06 -- dd/posix.sh@37 -- # 
local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:13.485 12:15:06 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:13.485 12:15:06 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:13.485 12:15:06 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:13.485 12:15:06 -- common/autotest_common.sh@638 -- # local es=0 00:18:13.485 12:15:06 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:13.485 12:15:06 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:13.485 12:15:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:13.485 12:15:06 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:13.485 12:15:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:13.485 12:15:06 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:13.485 12:15:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:13.485 12:15:06 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:13.485 12:15:06 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:13.485 12:15:06 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:13.485 [2024-04-26 12:15:06.917778] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
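(Annotation: the dd_flag_directory case a few lines up is an expected-failure check: dd.dump0 is a regular file, so opening it with --iflag=directory, and then writing with --oflag=directory, must fail with "Not a directory", and the NOT helper from autotest_common.sh turns that failure into a pass. In outline, using the same D/spdk_dd shorthand as above; a sketch, not the exact script:)

    NOT spdk_dd --if="$D/dd.dump0" --iflag=directory --of="$D/dd.dump0"   # must fail: dd.dump0 is not a directory
    NOT spdk_dd --if="$D/dd.dump0" --of="$D/dd.dump0" --oflag=directory   # must fail on the output side as well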
00:18:13.486 [2024-04-26 12:15:06.917892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63062 ] 00:18:13.744 [2024-04-26 12:15:07.056634] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.744 [2024-04-26 12:15:07.153011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.003 [2024-04-26 12:15:07.237021] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:18:14.003 [2024-04-26 12:15:07.237096] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:18:14.003 [2024-04-26 12:15:07.237131] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:14.003 [2024-04-26 12:15:07.350955] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:14.262 12:15:07 -- common/autotest_common.sh@641 -- # es=216 00:18:14.262 12:15:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:14.262 12:15:07 -- common/autotest_common.sh@650 -- # es=88 00:18:14.262 12:15:07 -- common/autotest_common.sh@651 -- # case "$es" in 00:18:14.262 12:15:07 -- common/autotest_common.sh@658 -- # es=1 00:18:14.262 12:15:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:14.262 12:15:07 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:14.262 12:15:07 -- common/autotest_common.sh@638 -- # local es=0 00:18:14.262 12:15:07 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:14.262 12:15:07 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.262 12:15:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:14.262 12:15:07 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.262 12:15:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:14.262 12:15:07 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.262 12:15:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:14.262 12:15:07 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.262 12:15:07 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:14.262 12:15:07 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:14.262 [2024-04-26 12:15:07.532608] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:14.262 [2024-04-26 12:15:07.532729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63066 ] 00:18:14.262 [2024-04-26 12:15:07.669967] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.521 [2024-04-26 12:15:07.771715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.521 [2024-04-26 12:15:07.861069] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:18:14.521 [2024-04-26 12:15:07.861136] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:18:14.521 [2024-04-26 12:15:07.861170] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:14.521 [2024-04-26 12:15:07.977191] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:14.780 12:15:08 -- common/autotest_common.sh@641 -- # es=216 00:18:14.780 12:15:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:14.780 12:15:08 -- common/autotest_common.sh@650 -- # es=88 00:18:14.780 12:15:08 -- common/autotest_common.sh@651 -- # case "$es" in 00:18:14.780 12:15:08 -- common/autotest_common.sh@658 -- # es=1 00:18:14.780 12:15:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:14.780 12:15:08 -- dd/posix.sh@46 -- # gen_bytes 512 00:18:14.780 12:15:08 -- dd/common.sh@98 -- # xtrace_disable 00:18:14.780 12:15:08 -- common/autotest_common.sh@10 -- # set +x 00:18:14.780 12:15:08 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:14.780 [2024-04-26 12:15:08.159012] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
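(Annotation: the dd_flag_nofollow sequence above prepares two symlinks with ln -fs and checks that spdk_dd refuses to follow them when nofollow is set, which is the "Too many levels of symbolic links" error in the trace, while the final copy through the link without the flag succeeds. In outline, same shorthand as above:)

    ln -fs "$D/dd.dump0" "$D/dd.dump0.link"
    ln -fs "$D/dd.dump1" "$D/dd.dump1.link"
    NOT spdk_dd --if="$D/dd.dump0.link" --iflag=nofollow --of="$D/dd.dump1"   # must fail on the input link
    NOT spdk_dd --if="$D/dd.dump0" --of="$D/dd.dump1.link" --oflag=nofollow   # must fail on the output link
    spdk_dd --if="$D/dd.dump0.link" --of="$D/dd.dump1"                        # no nofollow flag, so the copy succeeds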
00:18:14.780 [2024-04-26 12:15:08.159155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63079 ] 00:18:15.039 [2024-04-26 12:15:08.297751] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.039 [2024-04-26 12:15:08.412855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.297  Copying: 512/512 [B] (average 500 kBps) 00:18:15.297 00:18:15.297 12:15:08 -- dd/posix.sh@49 -- # [[ y4hkiab5fbd9sr3uotl45s84cufgaa815hfyh3zx27w2zot2jryht1dkfasovu2i8eg8zwdz2k0ujjc8d4lfkwscz2050ienl8ue5t4nsnv8zp09vrafq7ky46disi4z3uiz8wz11jtwy42o6ly2ken0sjlxprrcqibfj6tg3c31d5pqvsg5rl0s3tjs3okwylovect1223pbmumboi9kmvjmlccgynqew925fo2zof3j1pb00wl03zxbm3n7tbxmhccpi9715rfx4zed9hxpuf87sxi1b7bbrg2hjx6aq7jg5w7v61imkku3khuni0avagjnzqwgqd7y6nt97olvbgzi063mvcv24qtl46ig4j4zgbr0i26vkadfg03bjetfdm2pjti24cysrjbvsjwnwr7e6q1gyrmui1heou21f5fcqo2oqkfeitkm2kyhrndlb1blxto2s1enbesxlhcbkpb7sb608wnki2mr9iijg7zfjek0jkc9sxyi7v79dj7 == \y\4\h\k\i\a\b\5\f\b\d\9\s\r\3\u\o\t\l\4\5\s\8\4\c\u\f\g\a\a\8\1\5\h\f\y\h\3\z\x\2\7\w\2\z\o\t\2\j\r\y\h\t\1\d\k\f\a\s\o\v\u\2\i\8\e\g\8\z\w\d\z\2\k\0\u\j\j\c\8\d\4\l\f\k\w\s\c\z\2\0\5\0\i\e\n\l\8\u\e\5\t\4\n\s\n\v\8\z\p\0\9\v\r\a\f\q\7\k\y\4\6\d\i\s\i\4\z\3\u\i\z\8\w\z\1\1\j\t\w\y\4\2\o\6\l\y\2\k\e\n\0\s\j\l\x\p\r\r\c\q\i\b\f\j\6\t\g\3\c\3\1\d\5\p\q\v\s\g\5\r\l\0\s\3\t\j\s\3\o\k\w\y\l\o\v\e\c\t\1\2\2\3\p\b\m\u\m\b\o\i\9\k\m\v\j\m\l\c\c\g\y\n\q\e\w\9\2\5\f\o\2\z\o\f\3\j\1\p\b\0\0\w\l\0\3\z\x\b\m\3\n\7\t\b\x\m\h\c\c\p\i\9\7\1\5\r\f\x\4\z\e\d\9\h\x\p\u\f\8\7\s\x\i\1\b\7\b\b\r\g\2\h\j\x\6\a\q\7\j\g\5\w\7\v\6\1\i\m\k\k\u\3\k\h\u\n\i\0\a\v\a\g\j\n\z\q\w\g\q\d\7\y\6\n\t\9\7\o\l\v\b\g\z\i\0\6\3\m\v\c\v\2\4\q\t\l\4\6\i\g\4\j\4\z\g\b\r\0\i\2\6\v\k\a\d\f\g\0\3\b\j\e\t\f\d\m\2\p\j\t\i\2\4\c\y\s\r\j\b\v\s\j\w\n\w\r\7\e\6\q\1\g\y\r\m\u\i\1\h\e\o\u\2\1\f\5\f\c\q\o\2\o\q\k\f\e\i\t\k\m\2\k\y\h\r\n\d\l\b\1\b\l\x\t\o\2\s\1\e\n\b\e\s\x\l\h\c\b\k\p\b\7\s\b\6\0\8\w\n\k\i\2\m\r\9\i\i\j\g\7\z\f\j\e\k\0\j\k\c\9\s\x\y\i\7\v\7\9\d\j\7 ]] 00:18:15.297 00:18:15.297 real 0m1.882s 00:18:15.297 user 0m1.124s 00:18:15.297 sys 0m0.579s 00:18:15.297 ************************************ 00:18:15.297 END TEST dd_flag_nofollow 00:18:15.297 ************************************ 00:18:15.297 12:15:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:15.297 12:15:08 -- common/autotest_common.sh@10 -- # set +x 00:18:15.555 12:15:08 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:18:15.555 12:15:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:15.555 12:15:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:15.555 12:15:08 -- common/autotest_common.sh@10 -- # set +x 00:18:15.555 ************************************ 00:18:15.555 START TEST dd_flag_noatime 00:18:15.555 ************************************ 00:18:15.555 12:15:08 -- common/autotest_common.sh@1111 -- # noatime 00:18:15.555 12:15:08 -- dd/posix.sh@53 -- # local atime_if 00:18:15.555 12:15:08 -- dd/posix.sh@54 -- # local atime_of 00:18:15.555 12:15:08 -- dd/posix.sh@58 -- # gen_bytes 512 00:18:15.555 12:15:08 -- dd/common.sh@98 -- # xtrace_disable 00:18:15.555 12:15:08 -- common/autotest_common.sh@10 -- # set +x 00:18:15.555 12:15:08 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:15.555 12:15:08 -- dd/posix.sh@60 -- # atime_if=1714133708 00:18:15.555 12:15:08 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:15.555 12:15:08 -- dd/posix.sh@61 -- # atime_of=1714133708 00:18:15.555 12:15:08 -- dd/posix.sh@66 -- # sleep 1 00:18:16.489 12:15:09 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:16.489 [2024-04-26 12:15:09.928848] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:16.489 [2024-04-26 12:15:09.928986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63125 ] 00:18:16.748 [2024-04-26 12:15:10.064942] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.748 [2024-04-26 12:15:10.177200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.263  Copying: 512/512 [B] (average 500 kBps) 00:18:17.263 00:18:17.263 12:15:10 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:17.264 12:15:10 -- dd/posix.sh@69 -- # (( atime_if == 1714133708 )) 00:18:17.264 12:15:10 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:17.264 12:15:10 -- dd/posix.sh@70 -- # (( atime_of == 1714133708 )) 00:18:17.264 12:15:10 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:17.264 [2024-04-26 12:15:10.568152] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:17.264 [2024-04-26 12:15:10.568298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63139 ] 00:18:17.264 [2024-04-26 12:15:10.708578] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.522 [2024-04-26 12:15:10.819652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.780  Copying: 512/512 [B] (average 500 kBps) 00:18:17.780 00:18:17.780 12:15:11 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:17.780 12:15:11 -- dd/posix.sh@73 -- # (( atime_if < 1714133710 )) 00:18:17.780 00:18:17.780 real 0m2.304s 00:18:17.780 user 0m0.766s 00:18:17.780 sys 0m0.572s 00:18:17.780 12:15:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:17.780 12:15:11 -- common/autotest_common.sh@10 -- # set +x 00:18:17.780 ************************************ 00:18:17.780 END TEST dd_flag_noatime 00:18:17.780 ************************************ 00:18:17.780 12:15:11 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:18:17.780 12:15:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:17.780 12:15:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:17.780 12:15:11 -- common/autotest_common.sh@10 -- # set +x 00:18:18.045 ************************************ 00:18:18.046 START TEST dd_flags_misc 00:18:18.046 ************************************ 00:18:18.046 12:15:11 -- common/autotest_common.sh@1111 -- # io 00:18:18.046 12:15:11 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:18:18.046 12:15:11 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:18:18.046 
12:15:11 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:18:18.046 12:15:11 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:18:18.046 12:15:11 -- dd/posix.sh@86 -- # gen_bytes 512 00:18:18.046 12:15:11 -- dd/common.sh@98 -- # xtrace_disable 00:18:18.046 12:15:11 -- common/autotest_common.sh@10 -- # set +x 00:18:18.046 12:15:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:18.046 12:15:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:18:18.046 [2024-04-26 12:15:11.329518] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:18.046 [2024-04-26 12:15:11.329622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63178 ] 00:18:18.046 [2024-04-26 12:15:11.460678] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.307 [2024-04-26 12:15:11.578412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.566  Copying: 512/512 [B] (average 500 kBps) 00:18:18.566 00:18:18.566 12:15:11 -- dd/posix.sh@93 -- # [[ aw2t0ejv7j5pm4m5u1n8thz08v5ks7ewn7eo3joaqe66sjwmdbiqom4wedg5jwtxxj4muiy4yoze5wkrxbnfcwylo6xs67umhje8b9zytbxhmmhbkntgx3pvr1d97kwivzv6aov1hvxcrtfb81betmb6mizeu37nzv7k01iuxgjgroqftjhhfemo3xe26wjymujvef21drtvaa3heuu8m1u24s096aka7zibieff6qs3404f5astmv95ah7kii5lc12vqx9p9noawu94lrkyhuhhkxiyd3gh64adgi8942erru2xyskriuol5oyt52boijokl38jeoppdewiaj5lmexmz6imifxzotrheq533praqeciusu7uho10ppemlkadnd1leqvqmlbc0m7dzhdgeix5dmitjyn8lqd1damjpvzma5emn6kfbxduv1kamxnjig47jokg35je48nvgqmnnnw2wi1nqp9mkis2cyr5iss80rfl49rqkamtpc03cxe == \a\w\2\t\0\e\j\v\7\j\5\p\m\4\m\5\u\1\n\8\t\h\z\0\8\v\5\k\s\7\e\w\n\7\e\o\3\j\o\a\q\e\6\6\s\j\w\m\d\b\i\q\o\m\4\w\e\d\g\5\j\w\t\x\x\j\4\m\u\i\y\4\y\o\z\e\5\w\k\r\x\b\n\f\c\w\y\l\o\6\x\s\6\7\u\m\h\j\e\8\b\9\z\y\t\b\x\h\m\m\h\b\k\n\t\g\x\3\p\v\r\1\d\9\7\k\w\i\v\z\v\6\a\o\v\1\h\v\x\c\r\t\f\b\8\1\b\e\t\m\b\6\m\i\z\e\u\3\7\n\z\v\7\k\0\1\i\u\x\g\j\g\r\o\q\f\t\j\h\h\f\e\m\o\3\x\e\2\6\w\j\y\m\u\j\v\e\f\2\1\d\r\t\v\a\a\3\h\e\u\u\8\m\1\u\2\4\s\0\9\6\a\k\a\7\z\i\b\i\e\f\f\6\q\s\3\4\0\4\f\5\a\s\t\m\v\9\5\a\h\7\k\i\i\5\l\c\1\2\v\q\x\9\p\9\n\o\a\w\u\9\4\l\r\k\y\h\u\h\h\k\x\i\y\d\3\g\h\6\4\a\d\g\i\8\9\4\2\e\r\r\u\2\x\y\s\k\r\i\u\o\l\5\o\y\t\5\2\b\o\i\j\o\k\l\3\8\j\e\o\p\p\d\e\w\i\a\j\5\l\m\e\x\m\z\6\i\m\i\f\x\z\o\t\r\h\e\q\5\3\3\p\r\a\q\e\c\i\u\s\u\7\u\h\o\1\0\p\p\e\m\l\k\a\d\n\d\1\l\e\q\v\q\m\l\b\c\0\m\7\d\z\h\d\g\e\i\x\5\d\m\i\t\j\y\n\8\l\q\d\1\d\a\m\j\p\v\z\m\a\5\e\m\n\6\k\f\b\x\d\u\v\1\k\a\m\x\n\j\i\g\4\7\j\o\k\g\3\5\j\e\4\8\n\v\g\q\m\n\n\n\w\2\w\i\1\n\q\p\9\m\k\i\s\2\c\y\r\5\i\s\s\8\0\r\f\l\4\9\r\q\k\a\m\t\p\c\0\3\c\x\e ]] 00:18:18.566 12:15:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:18.566 12:15:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:18:18.566 [2024-04-26 12:15:11.958570] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
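(Annotation: the dd_flag_noatime run a little further up captures the dump files' access times with stat --printf=%X, 1714133708 in this run, copies with --iflag=noatime, and asserts the source atime did not move; a follow-up copy without the flag must advance it. In outline, same shorthand, assuming the GNU stat used in the trace:)

    atime_if=$(stat --printf=%X "$D/dd.dump0")
    sleep 1
    spdk_dd --if="$D/dd.dump0" --iflag=noatime --of="$D/dd.dump1"
    (( atime_if == $(stat --printf=%X "$D/dd.dump0") ))   # noatime read: access time unchanged
    spdk_dd --if="$D/dd.dump0" --of="$D/dd.dump1"
    (( atime_if <  $(stat --printf=%X "$D/dd.dump0") ))   # a normal read advances it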
00:18:18.566 [2024-04-26 12:15:11.958686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63183 ] 00:18:18.825 [2024-04-26 12:15:12.096991] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.825 [2024-04-26 12:15:12.209393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.083  Copying: 512/512 [B] (average 500 kBps) 00:18:19.083 00:18:19.083 12:15:12 -- dd/posix.sh@93 -- # [[ aw2t0ejv7j5pm4m5u1n8thz08v5ks7ewn7eo3joaqe66sjwmdbiqom4wedg5jwtxxj4muiy4yoze5wkrxbnfcwylo6xs67umhje8b9zytbxhmmhbkntgx3pvr1d97kwivzv6aov1hvxcrtfb81betmb6mizeu37nzv7k01iuxgjgroqftjhhfemo3xe26wjymujvef21drtvaa3heuu8m1u24s096aka7zibieff6qs3404f5astmv95ah7kii5lc12vqx9p9noawu94lrkyhuhhkxiyd3gh64adgi8942erru2xyskriuol5oyt52boijokl38jeoppdewiaj5lmexmz6imifxzotrheq533praqeciusu7uho10ppemlkadnd1leqvqmlbc0m7dzhdgeix5dmitjyn8lqd1damjpvzma5emn6kfbxduv1kamxnjig47jokg35je48nvgqmnnnw2wi1nqp9mkis2cyr5iss80rfl49rqkamtpc03cxe == \a\w\2\t\0\e\j\v\7\j\5\p\m\4\m\5\u\1\n\8\t\h\z\0\8\v\5\k\s\7\e\w\n\7\e\o\3\j\o\a\q\e\6\6\s\j\w\m\d\b\i\q\o\m\4\w\e\d\g\5\j\w\t\x\x\j\4\m\u\i\y\4\y\o\z\e\5\w\k\r\x\b\n\f\c\w\y\l\o\6\x\s\6\7\u\m\h\j\e\8\b\9\z\y\t\b\x\h\m\m\h\b\k\n\t\g\x\3\p\v\r\1\d\9\7\k\w\i\v\z\v\6\a\o\v\1\h\v\x\c\r\t\f\b\8\1\b\e\t\m\b\6\m\i\z\e\u\3\7\n\z\v\7\k\0\1\i\u\x\g\j\g\r\o\q\f\t\j\h\h\f\e\m\o\3\x\e\2\6\w\j\y\m\u\j\v\e\f\2\1\d\r\t\v\a\a\3\h\e\u\u\8\m\1\u\2\4\s\0\9\6\a\k\a\7\z\i\b\i\e\f\f\6\q\s\3\4\0\4\f\5\a\s\t\m\v\9\5\a\h\7\k\i\i\5\l\c\1\2\v\q\x\9\p\9\n\o\a\w\u\9\4\l\r\k\y\h\u\h\h\k\x\i\y\d\3\g\h\6\4\a\d\g\i\8\9\4\2\e\r\r\u\2\x\y\s\k\r\i\u\o\l\5\o\y\t\5\2\b\o\i\j\o\k\l\3\8\j\e\o\p\p\d\e\w\i\a\j\5\l\m\e\x\m\z\6\i\m\i\f\x\z\o\t\r\h\e\q\5\3\3\p\r\a\q\e\c\i\u\s\u\7\u\h\o\1\0\p\p\e\m\l\k\a\d\n\d\1\l\e\q\v\q\m\l\b\c\0\m\7\d\z\h\d\g\e\i\x\5\d\m\i\t\j\y\n\8\l\q\d\1\d\a\m\j\p\v\z\m\a\5\e\m\n\6\k\f\b\x\d\u\v\1\k\a\m\x\n\j\i\g\4\7\j\o\k\g\3\5\j\e\4\8\n\v\g\q\m\n\n\n\w\2\w\i\1\n\q\p\9\m\k\i\s\2\c\y\r\5\i\s\s\8\0\r\f\l\4\9\r\q\k\a\m\t\p\c\0\3\c\x\e ]] 00:18:19.083 12:15:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:19.083 12:15:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:18:19.342 [2024-04-26 12:15:12.582812] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
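(Annotation: the dd_flags_misc loop being traced here sweeps every read-flag/write-flag pairing: reads with direct or nonblock, writes with direct, nonblock, sync or dsync, and after each copy the 512-byte payload in dd.dump1 is compared back against dd.dump0; those are the long [[ ... == \...\... ]] expansions in the trace. In outline, same shorthand:)

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
        spdk_dd --if="$D/dd.dump0" --iflag="$flag_ro" --of="$D/dd.dump1" --oflag="$flag_rw"
        [[ $(< "$D/dd.dump1") == "$(< "$D/dd.dump0")" ]]   # payload must survive every flag combination
      done
    done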
00:18:19.342 [2024-04-26 12:15:12.582922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63197 ] 00:18:19.342 [2024-04-26 12:15:12.720560] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.600 [2024-04-26 12:15:12.825071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.859  Copying: 512/512 [B] (average 250 kBps) 00:18:19.859 00:18:19.859 12:15:13 -- dd/posix.sh@93 -- # [[ aw2t0ejv7j5pm4m5u1n8thz08v5ks7ewn7eo3joaqe66sjwmdbiqom4wedg5jwtxxj4muiy4yoze5wkrxbnfcwylo6xs67umhje8b9zytbxhmmhbkntgx3pvr1d97kwivzv6aov1hvxcrtfb81betmb6mizeu37nzv7k01iuxgjgroqftjhhfemo3xe26wjymujvef21drtvaa3heuu8m1u24s096aka7zibieff6qs3404f5astmv95ah7kii5lc12vqx9p9noawu94lrkyhuhhkxiyd3gh64adgi8942erru2xyskriuol5oyt52boijokl38jeoppdewiaj5lmexmz6imifxzotrheq533praqeciusu7uho10ppemlkadnd1leqvqmlbc0m7dzhdgeix5dmitjyn8lqd1damjpvzma5emn6kfbxduv1kamxnjig47jokg35je48nvgqmnnnw2wi1nqp9mkis2cyr5iss80rfl49rqkamtpc03cxe == \a\w\2\t\0\e\j\v\7\j\5\p\m\4\m\5\u\1\n\8\t\h\z\0\8\v\5\k\s\7\e\w\n\7\e\o\3\j\o\a\q\e\6\6\s\j\w\m\d\b\i\q\o\m\4\w\e\d\g\5\j\w\t\x\x\j\4\m\u\i\y\4\y\o\z\e\5\w\k\r\x\b\n\f\c\w\y\l\o\6\x\s\6\7\u\m\h\j\e\8\b\9\z\y\t\b\x\h\m\m\h\b\k\n\t\g\x\3\p\v\r\1\d\9\7\k\w\i\v\z\v\6\a\o\v\1\h\v\x\c\r\t\f\b\8\1\b\e\t\m\b\6\m\i\z\e\u\3\7\n\z\v\7\k\0\1\i\u\x\g\j\g\r\o\q\f\t\j\h\h\f\e\m\o\3\x\e\2\6\w\j\y\m\u\j\v\e\f\2\1\d\r\t\v\a\a\3\h\e\u\u\8\m\1\u\2\4\s\0\9\6\a\k\a\7\z\i\b\i\e\f\f\6\q\s\3\4\0\4\f\5\a\s\t\m\v\9\5\a\h\7\k\i\i\5\l\c\1\2\v\q\x\9\p\9\n\o\a\w\u\9\4\l\r\k\y\h\u\h\h\k\x\i\y\d\3\g\h\6\4\a\d\g\i\8\9\4\2\e\r\r\u\2\x\y\s\k\r\i\u\o\l\5\o\y\t\5\2\b\o\i\j\o\k\l\3\8\j\e\o\p\p\d\e\w\i\a\j\5\l\m\e\x\m\z\6\i\m\i\f\x\z\o\t\r\h\e\q\5\3\3\p\r\a\q\e\c\i\u\s\u\7\u\h\o\1\0\p\p\e\m\l\k\a\d\n\d\1\l\e\q\v\q\m\l\b\c\0\m\7\d\z\h\d\g\e\i\x\5\d\m\i\t\j\y\n\8\l\q\d\1\d\a\m\j\p\v\z\m\a\5\e\m\n\6\k\f\b\x\d\u\v\1\k\a\m\x\n\j\i\g\4\7\j\o\k\g\3\5\j\e\4\8\n\v\g\q\m\n\n\n\w\2\w\i\1\n\q\p\9\m\k\i\s\2\c\y\r\5\i\s\s\8\0\r\f\l\4\9\r\q\k\a\m\t\p\c\0\3\c\x\e ]] 00:18:19.859 12:15:13 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:19.859 12:15:13 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:18:19.859 [2024-04-26 12:15:13.211688] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:19.859 [2024-04-26 12:15:13.211801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63207 ] 00:18:20.117 [2024-04-26 12:15:13.349574] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.117 [2024-04-26 12:15:13.466879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.375  Copying: 512/512 [B] (average 166 kBps) 00:18:20.375 00:18:20.376 12:15:13 -- dd/posix.sh@93 -- # [[ aw2t0ejv7j5pm4m5u1n8thz08v5ks7ewn7eo3joaqe66sjwmdbiqom4wedg5jwtxxj4muiy4yoze5wkrxbnfcwylo6xs67umhje8b9zytbxhmmhbkntgx3pvr1d97kwivzv6aov1hvxcrtfb81betmb6mizeu37nzv7k01iuxgjgroqftjhhfemo3xe26wjymujvef21drtvaa3heuu8m1u24s096aka7zibieff6qs3404f5astmv95ah7kii5lc12vqx9p9noawu94lrkyhuhhkxiyd3gh64adgi8942erru2xyskriuol5oyt52boijokl38jeoppdewiaj5lmexmz6imifxzotrheq533praqeciusu7uho10ppemlkadnd1leqvqmlbc0m7dzhdgeix5dmitjyn8lqd1damjpvzma5emn6kfbxduv1kamxnjig47jokg35je48nvgqmnnnw2wi1nqp9mkis2cyr5iss80rfl49rqkamtpc03cxe == \a\w\2\t\0\e\j\v\7\j\5\p\m\4\m\5\u\1\n\8\t\h\z\0\8\v\5\k\s\7\e\w\n\7\e\o\3\j\o\a\q\e\6\6\s\j\w\m\d\b\i\q\o\m\4\w\e\d\g\5\j\w\t\x\x\j\4\m\u\i\y\4\y\o\z\e\5\w\k\r\x\b\n\f\c\w\y\l\o\6\x\s\6\7\u\m\h\j\e\8\b\9\z\y\t\b\x\h\m\m\h\b\k\n\t\g\x\3\p\v\r\1\d\9\7\k\w\i\v\z\v\6\a\o\v\1\h\v\x\c\r\t\f\b\8\1\b\e\t\m\b\6\m\i\z\e\u\3\7\n\z\v\7\k\0\1\i\u\x\g\j\g\r\o\q\f\t\j\h\h\f\e\m\o\3\x\e\2\6\w\j\y\m\u\j\v\e\f\2\1\d\r\t\v\a\a\3\h\e\u\u\8\m\1\u\2\4\s\0\9\6\a\k\a\7\z\i\b\i\e\f\f\6\q\s\3\4\0\4\f\5\a\s\t\m\v\9\5\a\h\7\k\i\i\5\l\c\1\2\v\q\x\9\p\9\n\o\a\w\u\9\4\l\r\k\y\h\u\h\h\k\x\i\y\d\3\g\h\6\4\a\d\g\i\8\9\4\2\e\r\r\u\2\x\y\s\k\r\i\u\o\l\5\o\y\t\5\2\b\o\i\j\o\k\l\3\8\j\e\o\p\p\d\e\w\i\a\j\5\l\m\e\x\m\z\6\i\m\i\f\x\z\o\t\r\h\e\q\5\3\3\p\r\a\q\e\c\i\u\s\u\7\u\h\o\1\0\p\p\e\m\l\k\a\d\n\d\1\l\e\q\v\q\m\l\b\c\0\m\7\d\z\h\d\g\e\i\x\5\d\m\i\t\j\y\n\8\l\q\d\1\d\a\m\j\p\v\z\m\a\5\e\m\n\6\k\f\b\x\d\u\v\1\k\a\m\x\n\j\i\g\4\7\j\o\k\g\3\5\j\e\4\8\n\v\g\q\m\n\n\n\w\2\w\i\1\n\q\p\9\m\k\i\s\2\c\y\r\5\i\s\s\8\0\r\f\l\4\9\r\q\k\a\m\t\p\c\0\3\c\x\e ]] 00:18:20.376 12:15:13 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:18:20.376 12:15:13 -- dd/posix.sh@86 -- # gen_bytes 512 00:18:20.376 12:15:13 -- dd/common.sh@98 -- # xtrace_disable 00:18:20.376 12:15:13 -- common/autotest_common.sh@10 -- # set +x 00:18:20.376 12:15:13 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:20.376 12:15:13 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:18:20.632 [2024-04-26 12:15:13.873696] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:20.632 [2024-04-26 12:15:13.873831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63216 ] 00:18:20.633 [2024-04-26 12:15:14.012094] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.889 [2024-04-26 12:15:14.125987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.148  Copying: 512/512 [B] (average 500 kBps) 00:18:21.148 00:18:21.148 12:15:14 -- dd/posix.sh@93 -- # [[ xlvcyv8fct91mvlor57dqzby7t6skaijz5ft1o5n3s5f8si3y00ofyg16q2rmcdef99uawxzumziojei0kz26ezdpc7zec7gxumxn0r5mqh16i6kls7ewpm44sw9udlybhsu45t3s1ca9mbn1bfrnlrcafjaza8tlm1w1x6jag76u7xox2urgcjappuj3ql1wble69sat9d9pqlhkwkdt34md4wlg7p4vaeme5083ooj6dniqttgxl45tco1i03anbd9rkytrf9n0ehwnddpzalrik8dwjdphineo5azcl4jcwwss26a30rbnpvv7za031o7qhrfjmbri5fv8l1rvm31odsnbhhj3bz0bzr5dmkyeuvc004o3gjmdssalw8q2nzgs6dg0rxplod35981slslg8xzo7uz03wevyujdxnu4ma59a5yvwhx3o1yzycbcx153g40i50a6srzcanzuyg8bej5i5uvvkk4zgwb891atjj8q9j9h9b65gyvc1ca == \x\l\v\c\y\v\8\f\c\t\9\1\m\v\l\o\r\5\7\d\q\z\b\y\7\t\6\s\k\a\i\j\z\5\f\t\1\o\5\n\3\s\5\f\8\s\i\3\y\0\0\o\f\y\g\1\6\q\2\r\m\c\d\e\f\9\9\u\a\w\x\z\u\m\z\i\o\j\e\i\0\k\z\2\6\e\z\d\p\c\7\z\e\c\7\g\x\u\m\x\n\0\r\5\m\q\h\1\6\i\6\k\l\s\7\e\w\p\m\4\4\s\w\9\u\d\l\y\b\h\s\u\4\5\t\3\s\1\c\a\9\m\b\n\1\b\f\r\n\l\r\c\a\f\j\a\z\a\8\t\l\m\1\w\1\x\6\j\a\g\7\6\u\7\x\o\x\2\u\r\g\c\j\a\p\p\u\j\3\q\l\1\w\b\l\e\6\9\s\a\t\9\d\9\p\q\l\h\k\w\k\d\t\3\4\m\d\4\w\l\g\7\p\4\v\a\e\m\e\5\0\8\3\o\o\j\6\d\n\i\q\t\t\g\x\l\4\5\t\c\o\1\i\0\3\a\n\b\d\9\r\k\y\t\r\f\9\n\0\e\h\w\n\d\d\p\z\a\l\r\i\k\8\d\w\j\d\p\h\i\n\e\o\5\a\z\c\l\4\j\c\w\w\s\s\2\6\a\3\0\r\b\n\p\v\v\7\z\a\0\3\1\o\7\q\h\r\f\j\m\b\r\i\5\f\v\8\l\1\r\v\m\3\1\o\d\s\n\b\h\h\j\3\b\z\0\b\z\r\5\d\m\k\y\e\u\v\c\0\0\4\o\3\g\j\m\d\s\s\a\l\w\8\q\2\n\z\g\s\6\d\g\0\r\x\p\l\o\d\3\5\9\8\1\s\l\s\l\g\8\x\z\o\7\u\z\0\3\w\e\v\y\u\j\d\x\n\u\4\m\a\5\9\a\5\y\v\w\h\x\3\o\1\y\z\y\c\b\c\x\1\5\3\g\4\0\i\5\0\a\6\s\r\z\c\a\n\z\u\y\g\8\b\e\j\5\i\5\u\v\v\k\k\4\z\g\w\b\8\9\1\a\t\j\j\8\q\9\j\9\h\9\b\6\5\g\y\v\c\1\c\a ]] 00:18:21.148 12:15:14 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:21.148 12:15:14 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:18:21.148 [2024-04-26 12:15:14.511944] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:21.148 [2024-04-26 12:15:14.512066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63231 ] 00:18:21.407 [2024-04-26 12:15:14.647787] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.407 [2024-04-26 12:15:14.763510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.664  Copying: 512/512 [B] (average 500 kBps) 00:18:21.664 00:18:21.665 12:15:15 -- dd/posix.sh@93 -- # [[ xlvcyv8fct91mvlor57dqzby7t6skaijz5ft1o5n3s5f8si3y00ofyg16q2rmcdef99uawxzumziojei0kz26ezdpc7zec7gxumxn0r5mqh16i6kls7ewpm44sw9udlybhsu45t3s1ca9mbn1bfrnlrcafjaza8tlm1w1x6jag76u7xox2urgcjappuj3ql1wble69sat9d9pqlhkwkdt34md4wlg7p4vaeme5083ooj6dniqttgxl45tco1i03anbd9rkytrf9n0ehwnddpzalrik8dwjdphineo5azcl4jcwwss26a30rbnpvv7za031o7qhrfjmbri5fv8l1rvm31odsnbhhj3bz0bzr5dmkyeuvc004o3gjmdssalw8q2nzgs6dg0rxplod35981slslg8xzo7uz03wevyujdxnu4ma59a5yvwhx3o1yzycbcx153g40i50a6srzcanzuyg8bej5i5uvvkk4zgwb891atjj8q9j9h9b65gyvc1ca == \x\l\v\c\y\v\8\f\c\t\9\1\m\v\l\o\r\5\7\d\q\z\b\y\7\t\6\s\k\a\i\j\z\5\f\t\1\o\5\n\3\s\5\f\8\s\i\3\y\0\0\o\f\y\g\1\6\q\2\r\m\c\d\e\f\9\9\u\a\w\x\z\u\m\z\i\o\j\e\i\0\k\z\2\6\e\z\d\p\c\7\z\e\c\7\g\x\u\m\x\n\0\r\5\m\q\h\1\6\i\6\k\l\s\7\e\w\p\m\4\4\s\w\9\u\d\l\y\b\h\s\u\4\5\t\3\s\1\c\a\9\m\b\n\1\b\f\r\n\l\r\c\a\f\j\a\z\a\8\t\l\m\1\w\1\x\6\j\a\g\7\6\u\7\x\o\x\2\u\r\g\c\j\a\p\p\u\j\3\q\l\1\w\b\l\e\6\9\s\a\t\9\d\9\p\q\l\h\k\w\k\d\t\3\4\m\d\4\w\l\g\7\p\4\v\a\e\m\e\5\0\8\3\o\o\j\6\d\n\i\q\t\t\g\x\l\4\5\t\c\o\1\i\0\3\a\n\b\d\9\r\k\y\t\r\f\9\n\0\e\h\w\n\d\d\p\z\a\l\r\i\k\8\d\w\j\d\p\h\i\n\e\o\5\a\z\c\l\4\j\c\w\w\s\s\2\6\a\3\0\r\b\n\p\v\v\7\z\a\0\3\1\o\7\q\h\r\f\j\m\b\r\i\5\f\v\8\l\1\r\v\m\3\1\o\d\s\n\b\h\h\j\3\b\z\0\b\z\r\5\d\m\k\y\e\u\v\c\0\0\4\o\3\g\j\m\d\s\s\a\l\w\8\q\2\n\z\g\s\6\d\g\0\r\x\p\l\o\d\3\5\9\8\1\s\l\s\l\g\8\x\z\o\7\u\z\0\3\w\e\v\y\u\j\d\x\n\u\4\m\a\5\9\a\5\y\v\w\h\x\3\o\1\y\z\y\c\b\c\x\1\5\3\g\4\0\i\5\0\a\6\s\r\z\c\a\n\z\u\y\g\8\b\e\j\5\i\5\u\v\v\k\k\4\z\g\w\b\8\9\1\a\t\j\j\8\q\9\j\9\h\9\b\6\5\g\y\v\c\1\c\a ]] 00:18:21.665 12:15:15 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:21.665 12:15:15 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:18:21.923 [2024-04-26 12:15:15.152687] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:21.923 [2024-04-26 12:15:15.152816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63241 ] 00:18:21.923 [2024-04-26 12:15:15.292503] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.182 [2024-04-26 12:15:15.404786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.441  Copying: 512/512 [B] (average 125 kBps) 00:18:22.441 00:18:22.441 12:15:15 -- dd/posix.sh@93 -- # [[ xlvcyv8fct91mvlor57dqzby7t6skaijz5ft1o5n3s5f8si3y00ofyg16q2rmcdef99uawxzumziojei0kz26ezdpc7zec7gxumxn0r5mqh16i6kls7ewpm44sw9udlybhsu45t3s1ca9mbn1bfrnlrcafjaza8tlm1w1x6jag76u7xox2urgcjappuj3ql1wble69sat9d9pqlhkwkdt34md4wlg7p4vaeme5083ooj6dniqttgxl45tco1i03anbd9rkytrf9n0ehwnddpzalrik8dwjdphineo5azcl4jcwwss26a30rbnpvv7za031o7qhrfjmbri5fv8l1rvm31odsnbhhj3bz0bzr5dmkyeuvc004o3gjmdssalw8q2nzgs6dg0rxplod35981slslg8xzo7uz03wevyujdxnu4ma59a5yvwhx3o1yzycbcx153g40i50a6srzcanzuyg8bej5i5uvvkk4zgwb891atjj8q9j9h9b65gyvc1ca == \x\l\v\c\y\v\8\f\c\t\9\1\m\v\l\o\r\5\7\d\q\z\b\y\7\t\6\s\k\a\i\j\z\5\f\t\1\o\5\n\3\s\5\f\8\s\i\3\y\0\0\o\f\y\g\1\6\q\2\r\m\c\d\e\f\9\9\u\a\w\x\z\u\m\z\i\o\j\e\i\0\k\z\2\6\e\z\d\p\c\7\z\e\c\7\g\x\u\m\x\n\0\r\5\m\q\h\1\6\i\6\k\l\s\7\e\w\p\m\4\4\s\w\9\u\d\l\y\b\h\s\u\4\5\t\3\s\1\c\a\9\m\b\n\1\b\f\r\n\l\r\c\a\f\j\a\z\a\8\t\l\m\1\w\1\x\6\j\a\g\7\6\u\7\x\o\x\2\u\r\g\c\j\a\p\p\u\j\3\q\l\1\w\b\l\e\6\9\s\a\t\9\d\9\p\q\l\h\k\w\k\d\t\3\4\m\d\4\w\l\g\7\p\4\v\a\e\m\e\5\0\8\3\o\o\j\6\d\n\i\q\t\t\g\x\l\4\5\t\c\o\1\i\0\3\a\n\b\d\9\r\k\y\t\r\f\9\n\0\e\h\w\n\d\d\p\z\a\l\r\i\k\8\d\w\j\d\p\h\i\n\e\o\5\a\z\c\l\4\j\c\w\w\s\s\2\6\a\3\0\r\b\n\p\v\v\7\z\a\0\3\1\o\7\q\h\r\f\j\m\b\r\i\5\f\v\8\l\1\r\v\m\3\1\o\d\s\n\b\h\h\j\3\b\z\0\b\z\r\5\d\m\k\y\e\u\v\c\0\0\4\o\3\g\j\m\d\s\s\a\l\w\8\q\2\n\z\g\s\6\d\g\0\r\x\p\l\o\d\3\5\9\8\1\s\l\s\l\g\8\x\z\o\7\u\z\0\3\w\e\v\y\u\j\d\x\n\u\4\m\a\5\9\a\5\y\v\w\h\x\3\o\1\y\z\y\c\b\c\x\1\5\3\g\4\0\i\5\0\a\6\s\r\z\c\a\n\z\u\y\g\8\b\e\j\5\i\5\u\v\v\k\k\4\z\g\w\b\8\9\1\a\t\j\j\8\q\9\j\9\h\9\b\6\5\g\y\v\c\1\c\a ]] 00:18:22.441 12:15:15 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:22.441 12:15:15 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:18:22.441 [2024-04-26 12:15:15.787454] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:22.441 [2024-04-26 12:15:15.787585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63250 ] 00:18:22.711 [2024-04-26 12:15:15.926641] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.711 [2024-04-26 12:15:16.040657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.971  Copying: 512/512 [B] (average 166 kBps) 00:18:22.971 00:18:22.971 12:15:16 -- dd/posix.sh@93 -- # [[ xlvcyv8fct91mvlor57dqzby7t6skaijz5ft1o5n3s5f8si3y00ofyg16q2rmcdef99uawxzumziojei0kz26ezdpc7zec7gxumxn0r5mqh16i6kls7ewpm44sw9udlybhsu45t3s1ca9mbn1bfrnlrcafjaza8tlm1w1x6jag76u7xox2urgcjappuj3ql1wble69sat9d9pqlhkwkdt34md4wlg7p4vaeme5083ooj6dniqttgxl45tco1i03anbd9rkytrf9n0ehwnddpzalrik8dwjdphineo5azcl4jcwwss26a30rbnpvv7za031o7qhrfjmbri5fv8l1rvm31odsnbhhj3bz0bzr5dmkyeuvc004o3gjmdssalw8q2nzgs6dg0rxplod35981slslg8xzo7uz03wevyujdxnu4ma59a5yvwhx3o1yzycbcx153g40i50a6srzcanzuyg8bej5i5uvvkk4zgwb891atjj8q9j9h9b65gyvc1ca == \x\l\v\c\y\v\8\f\c\t\9\1\m\v\l\o\r\5\7\d\q\z\b\y\7\t\6\s\k\a\i\j\z\5\f\t\1\o\5\n\3\s\5\f\8\s\i\3\y\0\0\o\f\y\g\1\6\q\2\r\m\c\d\e\f\9\9\u\a\w\x\z\u\m\z\i\o\j\e\i\0\k\z\2\6\e\z\d\p\c\7\z\e\c\7\g\x\u\m\x\n\0\r\5\m\q\h\1\6\i\6\k\l\s\7\e\w\p\m\4\4\s\w\9\u\d\l\y\b\h\s\u\4\5\t\3\s\1\c\a\9\m\b\n\1\b\f\r\n\l\r\c\a\f\j\a\z\a\8\t\l\m\1\w\1\x\6\j\a\g\7\6\u\7\x\o\x\2\u\r\g\c\j\a\p\p\u\j\3\q\l\1\w\b\l\e\6\9\s\a\t\9\d\9\p\q\l\h\k\w\k\d\t\3\4\m\d\4\w\l\g\7\p\4\v\a\e\m\e\5\0\8\3\o\o\j\6\d\n\i\q\t\t\g\x\l\4\5\t\c\o\1\i\0\3\a\n\b\d\9\r\k\y\t\r\f\9\n\0\e\h\w\n\d\d\p\z\a\l\r\i\k\8\d\w\j\d\p\h\i\n\e\o\5\a\z\c\l\4\j\c\w\w\s\s\2\6\a\3\0\r\b\n\p\v\v\7\z\a\0\3\1\o\7\q\h\r\f\j\m\b\r\i\5\f\v\8\l\1\r\v\m\3\1\o\d\s\n\b\h\h\j\3\b\z\0\b\z\r\5\d\m\k\y\e\u\v\c\0\0\4\o\3\g\j\m\d\s\s\a\l\w\8\q\2\n\z\g\s\6\d\g\0\r\x\p\l\o\d\3\5\9\8\1\s\l\s\l\g\8\x\z\o\7\u\z\0\3\w\e\v\y\u\j\d\x\n\u\4\m\a\5\9\a\5\y\v\w\h\x\3\o\1\y\z\y\c\b\c\x\1\5\3\g\4\0\i\5\0\a\6\s\r\z\c\a\n\z\u\y\g\8\b\e\j\5\i\5\u\v\v\k\k\4\z\g\w\b\8\9\1\a\t\j\j\8\q\9\j\9\h\9\b\6\5\g\y\v\c\1\c\a ]] 00:18:22.971 00:18:22.971 real 0m5.098s 00:18:22.971 user 0m3.065s 00:18:22.972 sys 0m2.214s 00:18:22.972 12:15:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:22.972 12:15:16 -- common/autotest_common.sh@10 -- # set +x 00:18:22.972 ************************************ 00:18:22.972 END TEST dd_flags_misc 00:18:22.972 ************************************ 00:18:22.972 12:15:16 -- dd/posix.sh@131 -- # tests_forced_aio 00:18:22.972 12:15:16 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:18:22.972 * Second test run, disabling liburing, forcing AIO 00:18:22.972 12:15:16 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:18:22.972 12:15:16 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:18:22.972 12:15:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:22.972 12:15:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:22.972 12:15:16 -- common/autotest_common.sh@10 -- # set +x 00:18:23.230 ************************************ 00:18:23.230 START TEST dd_flag_append_forced_aio 00:18:23.230 ************************************ 00:18:23.230 12:15:16 -- common/autotest_common.sh@1111 -- # append 00:18:23.230 12:15:16 -- dd/posix.sh@16 -- # local dump0 00:18:23.230 12:15:16 -- dd/posix.sh@17 -- # local dump1 00:18:23.230 12:15:16 -- dd/posix.sh@19 -- # gen_bytes 32 00:18:23.230 12:15:16 -- 
dd/common.sh@98 -- # xtrace_disable 00:18:23.230 12:15:16 -- common/autotest_common.sh@10 -- # set +x 00:18:23.230 12:15:16 -- dd/posix.sh@19 -- # dump0=mcm6n21gjbbguz97x4mmp01pbire65v2 00:18:23.230 12:15:16 -- dd/posix.sh@20 -- # gen_bytes 32 00:18:23.230 12:15:16 -- dd/common.sh@98 -- # xtrace_disable 00:18:23.230 12:15:16 -- common/autotest_common.sh@10 -- # set +x 00:18:23.230 12:15:16 -- dd/posix.sh@20 -- # dump1=r5kw35jv6n8u3j6cyqn3j7q5ficmnytt 00:18:23.230 12:15:16 -- dd/posix.sh@22 -- # printf %s mcm6n21gjbbguz97x4mmp01pbire65v2 00:18:23.230 12:15:16 -- dd/posix.sh@23 -- # printf %s r5kw35jv6n8u3j6cyqn3j7q5ficmnytt 00:18:23.230 12:15:16 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:18:23.230 [2024-04-26 12:15:16.544515] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:23.230 [2024-04-26 12:15:16.544638] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63288 ] 00:18:23.230 [2024-04-26 12:15:16.685320] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.488 [2024-04-26 12:15:16.800019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.746  Copying: 32/32 [B] (average 31 kBps) 00:18:23.746 00:18:23.746 12:15:17 -- dd/posix.sh@27 -- # [[ r5kw35jv6n8u3j6cyqn3j7q5ficmnyttmcm6n21gjbbguz97x4mmp01pbire65v2 == \r\5\k\w\3\5\j\v\6\n\8\u\3\j\6\c\y\q\n\3\j\7\q\5\f\i\c\m\n\y\t\t\m\c\m\6\n\2\1\g\j\b\b\g\u\z\9\7\x\4\m\m\p\0\1\p\b\i\r\e\6\5\v\2 ]] 00:18:23.746 00:18:23.746 real 0m0.688s 00:18:23.746 user 0m0.400s 00:18:23.746 sys 0m0.156s 00:18:23.746 12:15:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:23.746 ************************************ 00:18:23.746 END TEST dd_flag_append_forced_aio 00:18:23.746 ************************************ 00:18:23.746 12:15:17 -- common/autotest_common.sh@10 -- # set +x 00:18:24.004 12:15:17 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:18:24.004 12:15:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:24.004 12:15:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:24.004 12:15:17 -- common/autotest_common.sh@10 -- # set +x 00:18:24.004 ************************************ 00:18:24.004 START TEST dd_flag_directory_forced_aio 00:18:24.004 ************************************ 00:18:24.004 12:15:17 -- common/autotest_common.sh@1111 -- # directory 00:18:24.004 12:15:17 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:24.004 12:15:17 -- common/autotest_common.sh@638 -- # local es=0 00:18:24.005 12:15:17 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:24.005 12:15:17 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:24.005 12:15:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:24.005 12:15:17 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
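(Annotation: from the "Second test run" marker above, the posix flag tests are repeated with liburing disabled: DD_APP+=("--aio") makes every subsequent spdk_dd invocation run with --aio, and the append/directory/nofollow checks themselves are unchanged. Roughly, assuming DD_APP is the array the suite uses to launch spdk_dd:)

    DD_APP+=("--aio")                                                     # force AIO instead of liburing
    "${DD_APP[@]}" --if="$D/dd.dump0" --of="$D/dd.dump1" --oflag=append   # DD_APP expands to spdk_dd --aio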
00:18:24.005 12:15:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:24.005 12:15:17 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:24.005 12:15:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:24.005 12:15:17 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:24.005 12:15:17 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:24.005 12:15:17 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:24.005 [2024-04-26 12:15:17.345166] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:24.005 [2024-04-26 12:15:17.345305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63319 ] 00:18:24.262 [2024-04-26 12:15:17.485294] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.262 [2024-04-26 12:15:17.613964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.262 [2024-04-26 12:15:17.707168] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:24.262 [2024-04-26 12:15:17.707272] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:24.262 [2024-04-26 12:15:17.707293] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:24.520 [2024-04-26 12:15:17.823080] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:24.521 12:15:17 -- common/autotest_common.sh@641 -- # es=236 00:18:24.521 12:15:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:24.521 12:15:17 -- common/autotest_common.sh@650 -- # es=108 00:18:24.521 12:15:17 -- common/autotest_common.sh@651 -- # case "$es" in 00:18:24.521 12:15:17 -- common/autotest_common.sh@658 -- # es=1 00:18:24.521 12:15:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:24.521 12:15:17 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:24.521 12:15:17 -- common/autotest_common.sh@638 -- # local es=0 00:18:24.521 12:15:17 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:24.521 12:15:17 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:24.521 12:15:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:24.521 12:15:17 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:24.521 12:15:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:24.521 12:15:17 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:24.521 12:15:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:24.521 12:15:17 -- common/autotest_common.sh@632 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:24.521 12:15:17 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:24.521 12:15:17 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:24.778 [2024-04-26 12:15:18.007196] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:24.778 [2024-04-26 12:15:18.007320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63328 ] 00:18:24.778 [2024-04-26 12:15:18.147952] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.036 [2024-04-26 12:15:18.285089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.036 [2024-04-26 12:15:18.390861] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:25.036 [2024-04-26 12:15:18.390942] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:25.036 [2024-04-26 12:15:18.390963] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:25.295 [2024-04-26 12:15:18.507444] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:25.295 12:15:18 -- common/autotest_common.sh@641 -- # es=236 00:18:25.295 12:15:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:25.295 12:15:18 -- common/autotest_common.sh@650 -- # es=108 00:18:25.295 12:15:18 -- common/autotest_common.sh@651 -- # case "$es" in 00:18:25.295 12:15:18 -- common/autotest_common.sh@658 -- # es=1 00:18:25.295 12:15:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:25.295 00:18:25.295 real 0m1.343s 00:18:25.295 user 0m0.809s 00:18:25.295 sys 0m0.323s 00:18:25.295 12:15:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:25.295 12:15:18 -- common/autotest_common.sh@10 -- # set +x 00:18:25.295 ************************************ 00:18:25.295 END TEST dd_flag_directory_forced_aio 00:18:25.295 ************************************ 00:18:25.295 12:15:18 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:18:25.295 12:15:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:25.295 12:15:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:25.295 12:15:18 -- common/autotest_common.sh@10 -- # set +x 00:18:25.295 ************************************ 00:18:25.295 START TEST dd_flag_nofollow_forced_aio 00:18:25.295 ************************************ 00:18:25.295 12:15:18 -- common/autotest_common.sh@1111 -- # nofollow 00:18:25.295 12:15:18 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:25.295 12:15:18 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:25.295 12:15:18 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:25.295 12:15:18 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:25.295 12:15:18 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:25.295 12:15:18 -- common/autotest_common.sh@638 -- # local es=0 00:18:25.295 12:15:18 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:25.295 12:15:18 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:25.295 12:15:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:25.295 12:15:18 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:25.552 12:15:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:25.552 12:15:18 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:25.552 12:15:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:25.552 12:15:18 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:25.552 12:15:18 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:25.553 12:15:18 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:25.553 [2024-04-26 12:15:18.805688] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:25.553 [2024-04-26 12:15:18.805787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63366 ] 00:18:25.553 [2024-04-26 12:15:18.942118] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.810 [2024-04-26 12:15:19.063996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.810 [2024-04-26 12:15:19.161556] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:18:25.810 [2024-04-26 12:15:19.161625] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:18:25.810 [2024-04-26 12:15:19.161650] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:25.810 [2024-04-26 12:15:19.276319] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:26.068 12:15:19 -- common/autotest_common.sh@641 -- # es=216 00:18:26.068 12:15:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:26.068 12:15:19 -- common/autotest_common.sh@650 -- # es=88 00:18:26.068 12:15:19 -- common/autotest_common.sh@651 -- # case "$es" in 00:18:26.068 12:15:19 -- common/autotest_common.sh@658 -- # es=1 00:18:26.068 12:15:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:26.068 12:15:19 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:26.068 12:15:19 -- common/autotest_common.sh@638 -- # local es=0 00:18:26.068 12:15:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:26.068 12:15:19 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:26.068 12:15:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:26.068 12:15:19 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:26.068 12:15:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:26.068 12:15:19 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:26.068 12:15:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:26.068 12:15:19 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:26.068 12:15:19 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:26.068 12:15:19 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:26.068 [2024-04-26 12:15:19.463845] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:26.068 [2024-04-26 12:15:19.464155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63376 ] 00:18:26.326 [2024-04-26 12:15:19.605544] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.326 [2024-04-26 12:15:19.719771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.584 [2024-04-26 12:15:19.808711] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:18:26.584 [2024-04-26 12:15:19.808783] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:18:26.584 [2024-04-26 12:15:19.808804] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:26.584 [2024-04-26 12:15:19.920077] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:18:26.584 12:15:20 -- common/autotest_common.sh@641 -- # es=216 00:18:26.584 12:15:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:26.584 12:15:20 -- common/autotest_common.sh@650 -- # es=88 00:18:26.584 12:15:20 -- common/autotest_common.sh@651 -- # case "$es" in 00:18:26.584 12:15:20 -- common/autotest_common.sh@658 -- # es=1 00:18:26.584 12:15:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:26.584 12:15:20 -- dd/posix.sh@46 -- # gen_bytes 512 00:18:26.584 12:15:20 -- dd/common.sh@98 -- # xtrace_disable 00:18:26.584 12:15:20 -- common/autotest_common.sh@10 -- # set +x 00:18:26.842 12:15:20 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:26.842 [2024-04-26 12:15:20.108983] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:26.842 [2024-04-26 12:15:20.109084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63383 ] 00:18:26.842 [2024-04-26 12:15:20.249001] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.100 [2024-04-26 12:15:20.366153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.378  Copying: 512/512 [B] (average 500 kBps) 00:18:27.378 00:18:27.378 12:15:20 -- dd/posix.sh@49 -- # [[ kmqpacioq5cl9klaac4yz0d2k0vzaoklrvz38io51tjmwe6z0gkfjb4ijnx8n3e2ulor2y8g5s1j3s9fgh13abvlayqtzhr7xy9gxwaiidd33zl73dvi11kdqaa87k5373la621fom4s8hv71jkx6r567bzdsm3i1ydjln9ss4iy6oukgfetcfp8yynn4tmonhufcxabma9x012va9hrral2djahznvnelvxx213h7siuo81zc5xvgjyrkz2pw7y1aezrs0236viy2fg3e8b99xg4yysgty60jxcboiqhpl661uweova5j83bkif1wsk5e1m20amx0k816dukjyvoszw5ov10s9wscrgi4r9ko0anznk50jxkt4exawzrcagib18pz4ursajukz4e8c3fq7risxmwoa407x7nh75jogaxj9s13kw3kszf6uxsdx53bmqjp893pzwewbjmltupom04fzrz0qq5eazwqi0dj5s4fi63mefjgef0zaqkmf7 == \k\m\q\p\a\c\i\o\q\5\c\l\9\k\l\a\a\c\4\y\z\0\d\2\k\0\v\z\a\o\k\l\r\v\z\3\8\i\o\5\1\t\j\m\w\e\6\z\0\g\k\f\j\b\4\i\j\n\x\8\n\3\e\2\u\l\o\r\2\y\8\g\5\s\1\j\3\s\9\f\g\h\1\3\a\b\v\l\a\y\q\t\z\h\r\7\x\y\9\g\x\w\a\i\i\d\d\3\3\z\l\7\3\d\v\i\1\1\k\d\q\a\a\8\7\k\5\3\7\3\l\a\6\2\1\f\o\m\4\s\8\h\v\7\1\j\k\x\6\r\5\6\7\b\z\d\s\m\3\i\1\y\d\j\l\n\9\s\s\4\i\y\6\o\u\k\g\f\e\t\c\f\p\8\y\y\n\n\4\t\m\o\n\h\u\f\c\x\a\b\m\a\9\x\0\1\2\v\a\9\h\r\r\a\l\2\d\j\a\h\z\n\v\n\e\l\v\x\x\2\1\3\h\7\s\i\u\o\8\1\z\c\5\x\v\g\j\y\r\k\z\2\p\w\7\y\1\a\e\z\r\s\0\2\3\6\v\i\y\2\f\g\3\e\8\b\9\9\x\g\4\y\y\s\g\t\y\6\0\j\x\c\b\o\i\q\h\p\l\6\6\1\u\w\e\o\v\a\5\j\8\3\b\k\i\f\1\w\s\k\5\e\1\m\2\0\a\m\x\0\k\8\1\6\d\u\k\j\y\v\o\s\z\w\5\o\v\1\0\s\9\w\s\c\r\g\i\4\r\9\k\o\0\a\n\z\n\k\5\0\j\x\k\t\4\e\x\a\w\z\r\c\a\g\i\b\1\8\p\z\4\u\r\s\a\j\u\k\z\4\e\8\c\3\f\q\7\r\i\s\x\m\w\o\a\4\0\7\x\7\n\h\7\5\j\o\g\a\x\j\9\s\1\3\k\w\3\k\s\z\f\6\u\x\s\d\x\5\3\b\m\q\j\p\8\9\3\p\z\w\e\w\b\j\m\l\t\u\p\o\m\0\4\f\z\r\z\0\q\q\5\e\a\z\w\q\i\0\d\j\5\s\4\f\i\6\3\m\e\f\j\g\e\f\0\z\a\q\k\m\f\7 ]] 00:18:27.378 00:18:27.378 real 0m1.955s 00:18:27.378 user 0m1.168s 00:18:27.378 sys 0m0.453s 00:18:27.378 12:15:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:27.378 ************************************ 00:18:27.378 END TEST dd_flag_nofollow_forced_aio 00:18:27.378 ************************************ 00:18:27.378 12:15:20 -- common/autotest_common.sh@10 -- # set +x 00:18:27.378 12:15:20 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:18:27.378 12:15:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:27.378 12:15:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:27.378 12:15:20 -- common/autotest_common.sh@10 -- # set +x 00:18:27.378 ************************************ 00:18:27.378 START TEST dd_flag_noatime_forced_aio 00:18:27.378 ************************************ 00:18:27.378 12:15:20 -- common/autotest_common.sh@1111 -- # noatime 00:18:27.378 12:15:20 -- dd/posix.sh@53 -- # local atime_if 00:18:27.378 12:15:20 -- dd/posix.sh@54 -- # local atime_of 00:18:27.378 12:15:20 -- dd/posix.sh@58 -- # gen_bytes 512 00:18:27.378 12:15:20 -- dd/common.sh@98 -- # xtrace_disable 00:18:27.378 12:15:20 -- common/autotest_common.sh@10 -- # set +x 00:18:27.378 12:15:20 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:27.652 12:15:20 -- dd/posix.sh@60 -- # atime_if=1714133720 
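The dd_flag_noatime check above reduces to comparing the source file's access time (stat %X, epoch seconds) before and after an spdk_dd read opened with --iflag=noatime. A minimal standalone sketch of that check, reusing the binary and dump-file paths from this trace (variable names are illustrative only, and the recorded epoch value is simply whatever stat returns when the test runs):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

atime_before=$(stat --printf=%X "$SRC")   # access time of the source before the copy
sleep 1                                   # a refreshed atime would now be strictly larger
"$DD" --aio --if="$SRC" --iflag=noatime --of="$DST"
atime_after=$(stat --printf=%X "$SRC")

# With noatime the read must leave the source's access time untouched;
# the plain (no-flag) copy later in the test expects the opposite, i.e. a larger atime.
(( atime_before == atime_after )) || echo "noatime flag did not take effect"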
00:18:27.652 12:15:20 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:27.652 12:15:20 -- dd/posix.sh@61 -- # atime_of=1714133720 00:18:27.652 12:15:20 -- dd/posix.sh@66 -- # sleep 1 00:18:28.587 12:15:21 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:28.587 [2024-04-26 12:15:21.886861] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:28.587 [2024-04-26 12:15:21.886965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63433 ] 00:18:28.587 [2024-04-26 12:15:22.021803] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.846 [2024-04-26 12:15:22.143449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.120  Copying: 512/512 [B] (average 500 kBps) 00:18:29.120 00:18:29.120 12:15:22 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:29.120 12:15:22 -- dd/posix.sh@69 -- # (( atime_if == 1714133720 )) 00:18:29.120 12:15:22 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:29.120 12:15:22 -- dd/posix.sh@70 -- # (( atime_of == 1714133720 )) 00:18:29.120 12:15:22 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:29.120 [2024-04-26 12:15:22.556744] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:29.120 [2024-04-26 12:15:22.556856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63445 ] 00:18:29.379 [2024-04-26 12:15:22.698153] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.379 [2024-04-26 12:15:22.825020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.914  Copying: 512/512 [B] (average 500 kBps) 00:18:29.914 00:18:29.915 12:15:23 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:29.915 ************************************ 00:18:29.915 END TEST dd_flag_noatime_forced_aio 00:18:29.915 ************************************ 00:18:29.915 12:15:23 -- dd/posix.sh@73 -- # (( atime_if < 1714133722 )) 00:18:29.915 00:18:29.915 real 0m2.382s 00:18:29.915 user 0m0.819s 00:18:29.915 sys 0m0.323s 00:18:29.915 12:15:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:29.915 12:15:23 -- common/autotest_common.sh@10 -- # set +x 00:18:29.915 12:15:23 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:18:29.915 12:15:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:29.915 12:15:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:29.915 12:15:23 -- common/autotest_common.sh@10 -- # set +x 00:18:29.915 ************************************ 00:18:29.915 START TEST dd_flags_misc_forced_aio 00:18:29.915 ************************************ 00:18:29.915 12:15:23 -- common/autotest_common.sh@1111 -- # io 00:18:29.915 12:15:23 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:18:29.915 12:15:23 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:18:29.915 12:15:23 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:18:29.915 12:15:23 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:18:29.915 12:15:23 -- dd/posix.sh@86 -- # gen_bytes 512 00:18:29.915 12:15:23 -- dd/common.sh@98 -- # xtrace_disable 00:18:29.915 12:15:23 -- common/autotest_common.sh@10 -- # set +x 00:18:29.915 12:15:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:29.915 12:15:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:18:30.183 [2024-04-26 12:15:23.406798] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:30.183 [2024-04-26 12:15:23.406927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63482 ] 00:18:30.183 [2024-04-26 12:15:23.547249] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.452 [2024-04-26 12:15:23.674123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.712  Copying: 512/512 [B] (average 500 kBps) 00:18:30.712 00:18:30.712 12:15:24 -- dd/posix.sh@93 -- # [[ f6ei6as2bvrsqdyocsvonenjjxkxswld2ps41le1pir4vqgc0qvglhgu0ikewx6z9vkr1d70nix15iugomigy6e5jenrrv9uz9oqchyu203385swohur6p56snnomcnqvey5s7y8j6nat5y8oz2rplr18apqcqdlb3b2c3g0xtzks8rlxdvdgaz81aqdutlfili5kun2awa584i59hq3tfawpvdmrap4q62i6xrlg1uzuh5v9rxihbfw4ipks9un90kv2uq0xhrjwi6ejzb0h8qlrju71on9ciuovamm9c0hwfr23c56ssupvjf06dm6tl4zyq7b1q6coaqr6m5l7t7kk47yyaqcv1gb2kfsqljzyh2q0g0acfiqqywu9ihzkn4h1zjij4mqv1ba84cfq76u5knpk6ujp8nrnrpwsgiajg2rd5qzde8q50a71s2gvcg0k3dlni4al9zyujtr9xw9hslyforet5uytyv3zf2i3l1sdnjusamfh5ghp15o == \f\6\e\i\6\a\s\2\b\v\r\s\q\d\y\o\c\s\v\o\n\e\n\j\j\x\k\x\s\w\l\d\2\p\s\4\1\l\e\1\p\i\r\4\v\q\g\c\0\q\v\g\l\h\g\u\0\i\k\e\w\x\6\z\9\v\k\r\1\d\7\0\n\i\x\1\5\i\u\g\o\m\i\g\y\6\e\5\j\e\n\r\r\v\9\u\z\9\o\q\c\h\y\u\2\0\3\3\8\5\s\w\o\h\u\r\6\p\5\6\s\n\n\o\m\c\n\q\v\e\y\5\s\7\y\8\j\6\n\a\t\5\y\8\o\z\2\r\p\l\r\1\8\a\p\q\c\q\d\l\b\3\b\2\c\3\g\0\x\t\z\k\s\8\r\l\x\d\v\d\g\a\z\8\1\a\q\d\u\t\l\f\i\l\i\5\k\u\n\2\a\w\a\5\8\4\i\5\9\h\q\3\t\f\a\w\p\v\d\m\r\a\p\4\q\6\2\i\6\x\r\l\g\1\u\z\u\h\5\v\9\r\x\i\h\b\f\w\4\i\p\k\s\9\u\n\9\0\k\v\2\u\q\0\x\h\r\j\w\i\6\e\j\z\b\0\h\8\q\l\r\j\u\7\1\o\n\9\c\i\u\o\v\a\m\m\9\c\0\h\w\f\r\2\3\c\5\6\s\s\u\p\v\j\f\0\6\d\m\6\t\l\4\z\y\q\7\b\1\q\6\c\o\a\q\r\6\m\5\l\7\t\7\k\k\4\7\y\y\a\q\c\v\1\g\b\2\k\f\s\q\l\j\z\y\h\2\q\0\g\0\a\c\f\i\q\q\y\w\u\9\i\h\z\k\n\4\h\1\z\j\i\j\4\m\q\v\1\b\a\8\4\c\f\q\7\6\u\5\k\n\p\k\6\u\j\p\8\n\r\n\r\p\w\s\g\i\a\j\g\2\r\d\5\q\z\d\e\8\q\5\0\a\7\1\s\2\g\v\c\g\0\k\3\d\l\n\i\4\a\l\9\z\y\u\j\t\r\9\x\w\9\h\s\l\y\f\o\r\e\t\5\u\y\t\y\v\3\z\f\2\i\3\l\1\s\d\n\j\u\s\a\m\f\h\5\g\h\p\1\5\o ]] 00:18:30.712 12:15:24 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:30.712 12:15:24 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:18:30.712 [2024-04-26 12:15:24.086548] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:30.712 [2024-04-26 12:15:24.086693] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63489 ] 00:18:30.972 [2024-04-26 12:15:24.225495] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.972 [2024-04-26 12:15:24.330213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.231  Copying: 512/512 [B] (average 500 kBps) 00:18:31.231 00:18:31.231 12:15:24 -- dd/posix.sh@93 -- # [[ f6ei6as2bvrsqdyocsvonenjjxkxswld2ps41le1pir4vqgc0qvglhgu0ikewx6z9vkr1d70nix15iugomigy6e5jenrrv9uz9oqchyu203385swohur6p56snnomcnqvey5s7y8j6nat5y8oz2rplr18apqcqdlb3b2c3g0xtzks8rlxdvdgaz81aqdutlfili5kun2awa584i59hq3tfawpvdmrap4q62i6xrlg1uzuh5v9rxihbfw4ipks9un90kv2uq0xhrjwi6ejzb0h8qlrju71on9ciuovamm9c0hwfr23c56ssupvjf06dm6tl4zyq7b1q6coaqr6m5l7t7kk47yyaqcv1gb2kfsqljzyh2q0g0acfiqqywu9ihzkn4h1zjij4mqv1ba84cfq76u5knpk6ujp8nrnrpwsgiajg2rd5qzde8q50a71s2gvcg0k3dlni4al9zyujtr9xw9hslyforet5uytyv3zf2i3l1sdnjusamfh5ghp15o == \f\6\e\i\6\a\s\2\b\v\r\s\q\d\y\o\c\s\v\o\n\e\n\j\j\x\k\x\s\w\l\d\2\p\s\4\1\l\e\1\p\i\r\4\v\q\g\c\0\q\v\g\l\h\g\u\0\i\k\e\w\x\6\z\9\v\k\r\1\d\7\0\n\i\x\1\5\i\u\g\o\m\i\g\y\6\e\5\j\e\n\r\r\v\9\u\z\9\o\q\c\h\y\u\2\0\3\3\8\5\s\w\o\h\u\r\6\p\5\6\s\n\n\o\m\c\n\q\v\e\y\5\s\7\y\8\j\6\n\a\t\5\y\8\o\z\2\r\p\l\r\1\8\a\p\q\c\q\d\l\b\3\b\2\c\3\g\0\x\t\z\k\s\8\r\l\x\d\v\d\g\a\z\8\1\a\q\d\u\t\l\f\i\l\i\5\k\u\n\2\a\w\a\5\8\4\i\5\9\h\q\3\t\f\a\w\p\v\d\m\r\a\p\4\q\6\2\i\6\x\r\l\g\1\u\z\u\h\5\v\9\r\x\i\h\b\f\w\4\i\p\k\s\9\u\n\9\0\k\v\2\u\q\0\x\h\r\j\w\i\6\e\j\z\b\0\h\8\q\l\r\j\u\7\1\o\n\9\c\i\u\o\v\a\m\m\9\c\0\h\w\f\r\2\3\c\5\6\s\s\u\p\v\j\f\0\6\d\m\6\t\l\4\z\y\q\7\b\1\q\6\c\o\a\q\r\6\m\5\l\7\t\7\k\k\4\7\y\y\a\q\c\v\1\g\b\2\k\f\s\q\l\j\z\y\h\2\q\0\g\0\a\c\f\i\q\q\y\w\u\9\i\h\z\k\n\4\h\1\z\j\i\j\4\m\q\v\1\b\a\8\4\c\f\q\7\6\u\5\k\n\p\k\6\u\j\p\8\n\r\n\r\p\w\s\g\i\a\j\g\2\r\d\5\q\z\d\e\8\q\5\0\a\7\1\s\2\g\v\c\g\0\k\3\d\l\n\i\4\a\l\9\z\y\u\j\t\r\9\x\w\9\h\s\l\y\f\o\r\e\t\5\u\y\t\y\v\3\z\f\2\i\3\l\1\s\d\n\j\u\s\a\m\f\h\5\g\h\p\1\5\o ]] 00:18:31.231 12:15:24 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:31.231 12:15:24 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:18:31.489 [2024-04-26 12:15:24.731464] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:31.489 [2024-04-26 12:15:24.731575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63497 ] 00:18:31.489 [2024-04-26 12:15:24.871524] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.747 [2024-04-26 12:15:24.998228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.006  Copying: 512/512 [B] (average 100 kBps) 00:18:32.006 00:18:32.006 12:15:25 -- dd/posix.sh@93 -- # [[ f6ei6as2bvrsqdyocsvonenjjxkxswld2ps41le1pir4vqgc0qvglhgu0ikewx6z9vkr1d70nix15iugomigy6e5jenrrv9uz9oqchyu203385swohur6p56snnomcnqvey5s7y8j6nat5y8oz2rplr18apqcqdlb3b2c3g0xtzks8rlxdvdgaz81aqdutlfili5kun2awa584i59hq3tfawpvdmrap4q62i6xrlg1uzuh5v9rxihbfw4ipks9un90kv2uq0xhrjwi6ejzb0h8qlrju71on9ciuovamm9c0hwfr23c56ssupvjf06dm6tl4zyq7b1q6coaqr6m5l7t7kk47yyaqcv1gb2kfsqljzyh2q0g0acfiqqywu9ihzkn4h1zjij4mqv1ba84cfq76u5knpk6ujp8nrnrpwsgiajg2rd5qzde8q50a71s2gvcg0k3dlni4al9zyujtr9xw9hslyforet5uytyv3zf2i3l1sdnjusamfh5ghp15o == \f\6\e\i\6\a\s\2\b\v\r\s\q\d\y\o\c\s\v\o\n\e\n\j\j\x\k\x\s\w\l\d\2\p\s\4\1\l\e\1\p\i\r\4\v\q\g\c\0\q\v\g\l\h\g\u\0\i\k\e\w\x\6\z\9\v\k\r\1\d\7\0\n\i\x\1\5\i\u\g\o\m\i\g\y\6\e\5\j\e\n\r\r\v\9\u\z\9\o\q\c\h\y\u\2\0\3\3\8\5\s\w\o\h\u\r\6\p\5\6\s\n\n\o\m\c\n\q\v\e\y\5\s\7\y\8\j\6\n\a\t\5\y\8\o\z\2\r\p\l\r\1\8\a\p\q\c\q\d\l\b\3\b\2\c\3\g\0\x\t\z\k\s\8\r\l\x\d\v\d\g\a\z\8\1\a\q\d\u\t\l\f\i\l\i\5\k\u\n\2\a\w\a\5\8\4\i\5\9\h\q\3\t\f\a\w\p\v\d\m\r\a\p\4\q\6\2\i\6\x\r\l\g\1\u\z\u\h\5\v\9\r\x\i\h\b\f\w\4\i\p\k\s\9\u\n\9\0\k\v\2\u\q\0\x\h\r\j\w\i\6\e\j\z\b\0\h\8\q\l\r\j\u\7\1\o\n\9\c\i\u\o\v\a\m\m\9\c\0\h\w\f\r\2\3\c\5\6\s\s\u\p\v\j\f\0\6\d\m\6\t\l\4\z\y\q\7\b\1\q\6\c\o\a\q\r\6\m\5\l\7\t\7\k\k\4\7\y\y\a\q\c\v\1\g\b\2\k\f\s\q\l\j\z\y\h\2\q\0\g\0\a\c\f\i\q\q\y\w\u\9\i\h\z\k\n\4\h\1\z\j\i\j\4\m\q\v\1\b\a\8\4\c\f\q\7\6\u\5\k\n\p\k\6\u\j\p\8\n\r\n\r\p\w\s\g\i\a\j\g\2\r\d\5\q\z\d\e\8\q\5\0\a\7\1\s\2\g\v\c\g\0\k\3\d\l\n\i\4\a\l\9\z\y\u\j\t\r\9\x\w\9\h\s\l\y\f\o\r\e\t\5\u\y\t\y\v\3\z\f\2\i\3\l\1\s\d\n\j\u\s\a\m\f\h\5\g\h\p\1\5\o ]] 00:18:32.006 12:15:25 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:32.006 12:15:25 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:18:32.006 [2024-04-26 12:15:25.412023] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:32.006 [2024-04-26 12:15:25.412133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63510 ] 00:18:32.265 [2024-04-26 12:15:25.544693] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.265 [2024-04-26 12:15:25.656672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.779  Copying: 512/512 [B] (average 250 kBps) 00:18:32.779 00:18:32.780 12:15:26 -- dd/posix.sh@93 -- # [[ f6ei6as2bvrsqdyocsvonenjjxkxswld2ps41le1pir4vqgc0qvglhgu0ikewx6z9vkr1d70nix15iugomigy6e5jenrrv9uz9oqchyu203385swohur6p56snnomcnqvey5s7y8j6nat5y8oz2rplr18apqcqdlb3b2c3g0xtzks8rlxdvdgaz81aqdutlfili5kun2awa584i59hq3tfawpvdmrap4q62i6xrlg1uzuh5v9rxihbfw4ipks9un90kv2uq0xhrjwi6ejzb0h8qlrju71on9ciuovamm9c0hwfr23c56ssupvjf06dm6tl4zyq7b1q6coaqr6m5l7t7kk47yyaqcv1gb2kfsqljzyh2q0g0acfiqqywu9ihzkn4h1zjij4mqv1ba84cfq76u5knpk6ujp8nrnrpwsgiajg2rd5qzde8q50a71s2gvcg0k3dlni4al9zyujtr9xw9hslyforet5uytyv3zf2i3l1sdnjusamfh5ghp15o == \f\6\e\i\6\a\s\2\b\v\r\s\q\d\y\o\c\s\v\o\n\e\n\j\j\x\k\x\s\w\l\d\2\p\s\4\1\l\e\1\p\i\r\4\v\q\g\c\0\q\v\g\l\h\g\u\0\i\k\e\w\x\6\z\9\v\k\r\1\d\7\0\n\i\x\1\5\i\u\g\o\m\i\g\y\6\e\5\j\e\n\r\r\v\9\u\z\9\o\q\c\h\y\u\2\0\3\3\8\5\s\w\o\h\u\r\6\p\5\6\s\n\n\o\m\c\n\q\v\e\y\5\s\7\y\8\j\6\n\a\t\5\y\8\o\z\2\r\p\l\r\1\8\a\p\q\c\q\d\l\b\3\b\2\c\3\g\0\x\t\z\k\s\8\r\l\x\d\v\d\g\a\z\8\1\a\q\d\u\t\l\f\i\l\i\5\k\u\n\2\a\w\a\5\8\4\i\5\9\h\q\3\t\f\a\w\p\v\d\m\r\a\p\4\q\6\2\i\6\x\r\l\g\1\u\z\u\h\5\v\9\r\x\i\h\b\f\w\4\i\p\k\s\9\u\n\9\0\k\v\2\u\q\0\x\h\r\j\w\i\6\e\j\z\b\0\h\8\q\l\r\j\u\7\1\o\n\9\c\i\u\o\v\a\m\m\9\c\0\h\w\f\r\2\3\c\5\6\s\s\u\p\v\j\f\0\6\d\m\6\t\l\4\z\y\q\7\b\1\q\6\c\o\a\q\r\6\m\5\l\7\t\7\k\k\4\7\y\y\a\q\c\v\1\g\b\2\k\f\s\q\l\j\z\y\h\2\q\0\g\0\a\c\f\i\q\q\y\w\u\9\i\h\z\k\n\4\h\1\z\j\i\j\4\m\q\v\1\b\a\8\4\c\f\q\7\6\u\5\k\n\p\k\6\u\j\p\8\n\r\n\r\p\w\s\g\i\a\j\g\2\r\d\5\q\z\d\e\8\q\5\0\a\7\1\s\2\g\v\c\g\0\k\3\d\l\n\i\4\a\l\9\z\y\u\j\t\r\9\x\w\9\h\s\l\y\f\o\r\e\t\5\u\y\t\y\v\3\z\f\2\i\3\l\1\s\d\n\j\u\s\a\m\f\h\5\g\h\p\1\5\o ]] 00:18:32.780 12:15:26 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:18:32.780 12:15:26 -- dd/posix.sh@86 -- # gen_bytes 512 00:18:32.780 12:15:26 -- dd/common.sh@98 -- # xtrace_disable 00:18:32.780 12:15:26 -- common/autotest_common.sh@10 -- # set +x 00:18:32.780 12:15:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:32.780 12:15:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:18:32.780 [2024-04-26 12:15:26.109778] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:32.780 [2024-04-26 12:15:26.109903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63517 ] 00:18:33.038 [2024-04-26 12:15:26.253132] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.038 [2024-04-26 12:15:26.366914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.295  Copying: 512/512 [B] (average 500 kBps) 00:18:33.295 00:18:33.295 12:15:26 -- dd/posix.sh@93 -- # [[ 2eucsxndy3bee121pzlhf9zel1io84nc3s1ug5sed40cb671e7s3pczcn3cps1xdhuux899p78ndkvc1cpczby6egokndqk8gshn6g2fkj398kg8losjzafd55lyo992j19s08luw7wnx41p66hapqfzlzlmcxpv7fm2s8czsomtp99ad397h6rd1cye54eflf12zpzgr0pkvspbab0fuda0a6oihg2smp416kuddmbmsq7iha25unny84ji84ytx4avz0cvqm375uvzx7fuow9w2n5b0l925lf8rt3ogurn4dw8fzunbdg84wnvhfv3gxg9u68tdo2vq0i53cro6vht6xp5sbbb9t4ox5ptwwilsc8373qv98l3fjum45ze1z6fb1g6tkm53cg37ufzs9ao5fiyv2rsur777f4z39ysx662k5kaqe6lr2tgq9d4g9i123ml5c52542khw5w6de15qp8ho2qjt44km41ipssip865aueetznazcnii7v == \2\e\u\c\s\x\n\d\y\3\b\e\e\1\2\1\p\z\l\h\f\9\z\e\l\1\i\o\8\4\n\c\3\s\1\u\g\5\s\e\d\4\0\c\b\6\7\1\e\7\s\3\p\c\z\c\n\3\c\p\s\1\x\d\h\u\u\x\8\9\9\p\7\8\n\d\k\v\c\1\c\p\c\z\b\y\6\e\g\o\k\n\d\q\k\8\g\s\h\n\6\g\2\f\k\j\3\9\8\k\g\8\l\o\s\j\z\a\f\d\5\5\l\y\o\9\9\2\j\1\9\s\0\8\l\u\w\7\w\n\x\4\1\p\6\6\h\a\p\q\f\z\l\z\l\m\c\x\p\v\7\f\m\2\s\8\c\z\s\o\m\t\p\9\9\a\d\3\9\7\h\6\r\d\1\c\y\e\5\4\e\f\l\f\1\2\z\p\z\g\r\0\p\k\v\s\p\b\a\b\0\f\u\d\a\0\a\6\o\i\h\g\2\s\m\p\4\1\6\k\u\d\d\m\b\m\s\q\7\i\h\a\2\5\u\n\n\y\8\4\j\i\8\4\y\t\x\4\a\v\z\0\c\v\q\m\3\7\5\u\v\z\x\7\f\u\o\w\9\w\2\n\5\b\0\l\9\2\5\l\f\8\r\t\3\o\g\u\r\n\4\d\w\8\f\z\u\n\b\d\g\8\4\w\n\v\h\f\v\3\g\x\g\9\u\6\8\t\d\o\2\v\q\0\i\5\3\c\r\o\6\v\h\t\6\x\p\5\s\b\b\b\9\t\4\o\x\5\p\t\w\w\i\l\s\c\8\3\7\3\q\v\9\8\l\3\f\j\u\m\4\5\z\e\1\z\6\f\b\1\g\6\t\k\m\5\3\c\g\3\7\u\f\z\s\9\a\o\5\f\i\y\v\2\r\s\u\r\7\7\7\f\4\z\3\9\y\s\x\6\6\2\k\5\k\a\q\e\6\l\r\2\t\g\q\9\d\4\g\9\i\1\2\3\m\l\5\c\5\2\5\4\2\k\h\w\5\w\6\d\e\1\5\q\p\8\h\o\2\q\j\t\4\4\k\m\4\1\i\p\s\s\i\p\8\6\5\a\u\e\e\t\z\n\a\z\c\n\i\i\7\v ]] 00:18:33.295 12:15:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:33.295 12:15:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:18:33.552 [2024-04-26 12:15:26.763895] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:33.552 [2024-04-26 12:15:26.764023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63525 ] 00:18:33.552 [2024-04-26 12:15:26.899705] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.552 [2024-04-26 12:15:27.015732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.075  Copying: 512/512 [B] (average 500 kBps) 00:18:34.075 00:18:34.075 12:15:27 -- dd/posix.sh@93 -- # [[ 2eucsxndy3bee121pzlhf9zel1io84nc3s1ug5sed40cb671e7s3pczcn3cps1xdhuux899p78ndkvc1cpczby6egokndqk8gshn6g2fkj398kg8losjzafd55lyo992j19s08luw7wnx41p66hapqfzlzlmcxpv7fm2s8czsomtp99ad397h6rd1cye54eflf12zpzgr0pkvspbab0fuda0a6oihg2smp416kuddmbmsq7iha25unny84ji84ytx4avz0cvqm375uvzx7fuow9w2n5b0l925lf8rt3ogurn4dw8fzunbdg84wnvhfv3gxg9u68tdo2vq0i53cro6vht6xp5sbbb9t4ox5ptwwilsc8373qv98l3fjum45ze1z6fb1g6tkm53cg37ufzs9ao5fiyv2rsur777f4z39ysx662k5kaqe6lr2tgq9d4g9i123ml5c52542khw5w6de15qp8ho2qjt44km41ipssip865aueetznazcnii7v == \2\e\u\c\s\x\n\d\y\3\b\e\e\1\2\1\p\z\l\h\f\9\z\e\l\1\i\o\8\4\n\c\3\s\1\u\g\5\s\e\d\4\0\c\b\6\7\1\e\7\s\3\p\c\z\c\n\3\c\p\s\1\x\d\h\u\u\x\8\9\9\p\7\8\n\d\k\v\c\1\c\p\c\z\b\y\6\e\g\o\k\n\d\q\k\8\g\s\h\n\6\g\2\f\k\j\3\9\8\k\g\8\l\o\s\j\z\a\f\d\5\5\l\y\o\9\9\2\j\1\9\s\0\8\l\u\w\7\w\n\x\4\1\p\6\6\h\a\p\q\f\z\l\z\l\m\c\x\p\v\7\f\m\2\s\8\c\z\s\o\m\t\p\9\9\a\d\3\9\7\h\6\r\d\1\c\y\e\5\4\e\f\l\f\1\2\z\p\z\g\r\0\p\k\v\s\p\b\a\b\0\f\u\d\a\0\a\6\o\i\h\g\2\s\m\p\4\1\6\k\u\d\d\m\b\m\s\q\7\i\h\a\2\5\u\n\n\y\8\4\j\i\8\4\y\t\x\4\a\v\z\0\c\v\q\m\3\7\5\u\v\z\x\7\f\u\o\w\9\w\2\n\5\b\0\l\9\2\5\l\f\8\r\t\3\o\g\u\r\n\4\d\w\8\f\z\u\n\b\d\g\8\4\w\n\v\h\f\v\3\g\x\g\9\u\6\8\t\d\o\2\v\q\0\i\5\3\c\r\o\6\v\h\t\6\x\p\5\s\b\b\b\9\t\4\o\x\5\p\t\w\w\i\l\s\c\8\3\7\3\q\v\9\8\l\3\f\j\u\m\4\5\z\e\1\z\6\f\b\1\g\6\t\k\m\5\3\c\g\3\7\u\f\z\s\9\a\o\5\f\i\y\v\2\r\s\u\r\7\7\7\f\4\z\3\9\y\s\x\6\6\2\k\5\k\a\q\e\6\l\r\2\t\g\q\9\d\4\g\9\i\1\2\3\m\l\5\c\5\2\5\4\2\k\h\w\5\w\6\d\e\1\5\q\p\8\h\o\2\q\j\t\4\4\k\m\4\1\i\p\s\s\i\p\8\6\5\a\u\e\e\t\z\n\a\z\c\n\i\i\7\v ]] 00:18:34.075 12:15:27 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:34.075 12:15:27 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:18:34.075 [2024-04-26 12:15:27.424653] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:34.075 [2024-04-26 12:15:27.424773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63538 ] 00:18:34.331 [2024-04-26 12:15:27.559641] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.331 [2024-04-26 12:15:27.674319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.589  Copying: 512/512 [B] (average 250 kBps) 00:18:34.589 00:18:34.589 12:15:28 -- dd/posix.sh@93 -- # [[ 2eucsxndy3bee121pzlhf9zel1io84nc3s1ug5sed40cb671e7s3pczcn3cps1xdhuux899p78ndkvc1cpczby6egokndqk8gshn6g2fkj398kg8losjzafd55lyo992j19s08luw7wnx41p66hapqfzlzlmcxpv7fm2s8czsomtp99ad397h6rd1cye54eflf12zpzgr0pkvspbab0fuda0a6oihg2smp416kuddmbmsq7iha25unny84ji84ytx4avz0cvqm375uvzx7fuow9w2n5b0l925lf8rt3ogurn4dw8fzunbdg84wnvhfv3gxg9u68tdo2vq0i53cro6vht6xp5sbbb9t4ox5ptwwilsc8373qv98l3fjum45ze1z6fb1g6tkm53cg37ufzs9ao5fiyv2rsur777f4z39ysx662k5kaqe6lr2tgq9d4g9i123ml5c52542khw5w6de15qp8ho2qjt44km41ipssip865aueetznazcnii7v == \2\e\u\c\s\x\n\d\y\3\b\e\e\1\2\1\p\z\l\h\f\9\z\e\l\1\i\o\8\4\n\c\3\s\1\u\g\5\s\e\d\4\0\c\b\6\7\1\e\7\s\3\p\c\z\c\n\3\c\p\s\1\x\d\h\u\u\x\8\9\9\p\7\8\n\d\k\v\c\1\c\p\c\z\b\y\6\e\g\o\k\n\d\q\k\8\g\s\h\n\6\g\2\f\k\j\3\9\8\k\g\8\l\o\s\j\z\a\f\d\5\5\l\y\o\9\9\2\j\1\9\s\0\8\l\u\w\7\w\n\x\4\1\p\6\6\h\a\p\q\f\z\l\z\l\m\c\x\p\v\7\f\m\2\s\8\c\z\s\o\m\t\p\9\9\a\d\3\9\7\h\6\r\d\1\c\y\e\5\4\e\f\l\f\1\2\z\p\z\g\r\0\p\k\v\s\p\b\a\b\0\f\u\d\a\0\a\6\o\i\h\g\2\s\m\p\4\1\6\k\u\d\d\m\b\m\s\q\7\i\h\a\2\5\u\n\n\y\8\4\j\i\8\4\y\t\x\4\a\v\z\0\c\v\q\m\3\7\5\u\v\z\x\7\f\u\o\w\9\w\2\n\5\b\0\l\9\2\5\l\f\8\r\t\3\o\g\u\r\n\4\d\w\8\f\z\u\n\b\d\g\8\4\w\n\v\h\f\v\3\g\x\g\9\u\6\8\t\d\o\2\v\q\0\i\5\3\c\r\o\6\v\h\t\6\x\p\5\s\b\b\b\9\t\4\o\x\5\p\t\w\w\i\l\s\c\8\3\7\3\q\v\9\8\l\3\f\j\u\m\4\5\z\e\1\z\6\f\b\1\g\6\t\k\m\5\3\c\g\3\7\u\f\z\s\9\a\o\5\f\i\y\v\2\r\s\u\r\7\7\7\f\4\z\3\9\y\s\x\6\6\2\k\5\k\a\q\e\6\l\r\2\t\g\q\9\d\4\g\9\i\1\2\3\m\l\5\c\5\2\5\4\2\k\h\w\5\w\6\d\e\1\5\q\p\8\h\o\2\q\j\t\4\4\k\m\4\1\i\p\s\s\i\p\8\6\5\a\u\e\e\t\z\n\a\z\c\n\i\i\7\v ]] 00:18:34.589 12:15:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:34.589 12:15:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:18:34.847 [2024-04-26 12:15:28.083344] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:34.847 [2024-04-26 12:15:28.083449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63540 ] 00:18:34.847 [2024-04-26 12:15:28.219912] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.105 [2024-04-26 12:15:28.326918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.363  Copying: 512/512 [B] (average 166 kBps) 00:18:35.363 00:18:35.363 ************************************ 00:18:35.363 END TEST dd_flags_misc_forced_aio 00:18:35.363 ************************************ 00:18:35.363 12:15:28 -- dd/posix.sh@93 -- # [[ 2eucsxndy3bee121pzlhf9zel1io84nc3s1ug5sed40cb671e7s3pczcn3cps1xdhuux899p78ndkvc1cpczby6egokndqk8gshn6g2fkj398kg8losjzafd55lyo992j19s08luw7wnx41p66hapqfzlzlmcxpv7fm2s8czsomtp99ad397h6rd1cye54eflf12zpzgr0pkvspbab0fuda0a6oihg2smp416kuddmbmsq7iha25unny84ji84ytx4avz0cvqm375uvzx7fuow9w2n5b0l925lf8rt3ogurn4dw8fzunbdg84wnvhfv3gxg9u68tdo2vq0i53cro6vht6xp5sbbb9t4ox5ptwwilsc8373qv98l3fjum45ze1z6fb1g6tkm53cg37ufzs9ao5fiyv2rsur777f4z39ysx662k5kaqe6lr2tgq9d4g9i123ml5c52542khw5w6de15qp8ho2qjt44km41ipssip865aueetznazcnii7v == \2\e\u\c\s\x\n\d\y\3\b\e\e\1\2\1\p\z\l\h\f\9\z\e\l\1\i\o\8\4\n\c\3\s\1\u\g\5\s\e\d\4\0\c\b\6\7\1\e\7\s\3\p\c\z\c\n\3\c\p\s\1\x\d\h\u\u\x\8\9\9\p\7\8\n\d\k\v\c\1\c\p\c\z\b\y\6\e\g\o\k\n\d\q\k\8\g\s\h\n\6\g\2\f\k\j\3\9\8\k\g\8\l\o\s\j\z\a\f\d\5\5\l\y\o\9\9\2\j\1\9\s\0\8\l\u\w\7\w\n\x\4\1\p\6\6\h\a\p\q\f\z\l\z\l\m\c\x\p\v\7\f\m\2\s\8\c\z\s\o\m\t\p\9\9\a\d\3\9\7\h\6\r\d\1\c\y\e\5\4\e\f\l\f\1\2\z\p\z\g\r\0\p\k\v\s\p\b\a\b\0\f\u\d\a\0\a\6\o\i\h\g\2\s\m\p\4\1\6\k\u\d\d\m\b\m\s\q\7\i\h\a\2\5\u\n\n\y\8\4\j\i\8\4\y\t\x\4\a\v\z\0\c\v\q\m\3\7\5\u\v\z\x\7\f\u\o\w\9\w\2\n\5\b\0\l\9\2\5\l\f\8\r\t\3\o\g\u\r\n\4\d\w\8\f\z\u\n\b\d\g\8\4\w\n\v\h\f\v\3\g\x\g\9\u\6\8\t\d\o\2\v\q\0\i\5\3\c\r\o\6\v\h\t\6\x\p\5\s\b\b\b\9\t\4\o\x\5\p\t\w\w\i\l\s\c\8\3\7\3\q\v\9\8\l\3\f\j\u\m\4\5\z\e\1\z\6\f\b\1\g\6\t\k\m\5\3\c\g\3\7\u\f\z\s\9\a\o\5\f\i\y\v\2\r\s\u\r\7\7\7\f\4\z\3\9\y\s\x\6\6\2\k\5\k\a\q\e\6\l\r\2\t\g\q\9\d\4\g\9\i\1\2\3\m\l\5\c\5\2\5\4\2\k\h\w\5\w\6\d\e\1\5\q\p\8\h\o\2\q\j\t\4\4\k\m\4\1\i\p\s\s\i\p\8\6\5\a\u\e\e\t\z\n\a\z\c\n\i\i\7\v ]] 00:18:35.363 00:18:35.363 real 0m5.350s 00:18:35.363 user 0m3.148s 00:18:35.363 sys 0m1.201s 00:18:35.363 12:15:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:35.363 12:15:28 -- common/autotest_common.sh@10 -- # set +x 00:18:35.363 12:15:28 -- dd/posix.sh@1 -- # cleanup 00:18:35.363 12:15:28 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:35.363 12:15:28 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:35.363 ************************************ 00:18:35.363 END TEST spdk_dd_posix 00:18:35.363 ************************************ 00:18:35.363 00:18:35.363 real 0m24.187s 00:18:35.363 user 0m12.840s 00:18:35.363 sys 0m7.094s 00:18:35.363 12:15:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:35.363 12:15:28 -- common/autotest_common.sh@10 -- # set +x 00:18:35.363 12:15:28 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:18:35.363 12:15:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:35.363 12:15:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:35.363 12:15:28 -- 
common/autotest_common.sh@10 -- # set +x 00:18:35.621 ************************************ 00:18:35.621 START TEST spdk_dd_malloc 00:18:35.621 ************************************ 00:18:35.621 12:15:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:18:35.621 * Looking for test storage... 00:18:35.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:35.621 12:15:28 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:35.621 12:15:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.621 12:15:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.621 12:15:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.621 12:15:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.621 12:15:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.621 12:15:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.621 12:15:28 -- paths/export.sh@5 -- # export PATH 00:18:35.621 12:15:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.621 12:15:28 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:18:35.621 12:15:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:35.621 12:15:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:35.621 12:15:28 -- common/autotest_common.sh@10 -- # set +x 00:18:35.621 ************************************ 00:18:35.621 START TEST dd_malloc_copy 00:18:35.621 
************************************ 00:18:35.621 12:15:29 -- common/autotest_common.sh@1111 -- # malloc_copy 00:18:35.621 12:15:29 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:18:35.621 12:15:29 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:18:35.621 12:15:29 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:18:35.621 12:15:29 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:18:35.621 12:15:29 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:18:35.621 12:15:29 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:18:35.621 12:15:29 -- dd/malloc.sh@28 -- # gen_conf 00:18:35.621 12:15:29 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:18:35.621 12:15:29 -- dd/common.sh@31 -- # xtrace_disable 00:18:35.621 12:15:29 -- common/autotest_common.sh@10 -- # set +x 00:18:35.621 [2024-04-26 12:15:29.070789] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:35.621 [2024-04-26 12:15:29.071082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63623 ] 00:18:35.621 { 00:18:35.621 "subsystems": [ 00:18:35.621 { 00:18:35.621 "subsystem": "bdev", 00:18:35.621 "config": [ 00:18:35.621 { 00:18:35.621 "params": { 00:18:35.621 "block_size": 512, 00:18:35.621 "num_blocks": 1048576, 00:18:35.621 "name": "malloc0" 00:18:35.621 }, 00:18:35.621 "method": "bdev_malloc_create" 00:18:35.621 }, 00:18:35.621 { 00:18:35.621 "params": { 00:18:35.621 "block_size": 512, 00:18:35.621 "num_blocks": 1048576, 00:18:35.621 "name": "malloc1" 00:18:35.621 }, 00:18:35.621 "method": "bdev_malloc_create" 00:18:35.621 }, 00:18:35.621 { 00:18:35.621 "method": "bdev_wait_for_examine" 00:18:35.621 } 00:18:35.621 ] 00:18:35.621 } 00:18:35.621 ] 00:18:35.621 } 00:18:35.878 [2024-04-26 12:15:29.205434] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.878 [2024-04-26 12:15:29.316294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.454  Copying: 198/512 [MB] (198 MBps) Copying: 398/512 [MB] (199 MBps) Copying: 512/512 [MB] (average 199 MBps) 00:18:39.454 00:18:39.454 12:15:32 -- dd/malloc.sh@33 -- # gen_conf 00:18:39.454 12:15:32 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:18:39.454 12:15:32 -- dd/common.sh@31 -- # xtrace_disable 00:18:39.454 12:15:32 -- common/autotest_common.sh@10 -- # set +x 00:18:39.712 [2024-04-26 12:15:32.971908] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
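The JSON that gen_conf streams to spdk_dd over /dev/fd/62 in the malloc-copy runs above amounts to the configuration below: two 512 MiB RAM-backed bdevs (1048576 blocks of 512 bytes) that the test copies between in both directions. A sketch of an equivalent standalone invocation; writing the config to a temporary file instead of a process-substitution fd is an illustrative simplification:

cat > /tmp/malloc_copy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Copy the whole of malloc0 into malloc1 (the second run in the trace reverses the direction).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /tmp/malloc_copy.json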
00:18:39.712 [2024-04-26 12:15:32.972001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63676 ] 00:18:39.712 { 00:18:39.712 "subsystems": [ 00:18:39.712 { 00:18:39.712 "subsystem": "bdev", 00:18:39.712 "config": [ 00:18:39.712 { 00:18:39.712 "params": { 00:18:39.712 "block_size": 512, 00:18:39.712 "num_blocks": 1048576, 00:18:39.712 "name": "malloc0" 00:18:39.712 }, 00:18:39.712 "method": "bdev_malloc_create" 00:18:39.712 }, 00:18:39.712 { 00:18:39.712 "params": { 00:18:39.712 "block_size": 512, 00:18:39.712 "num_blocks": 1048576, 00:18:39.712 "name": "malloc1" 00:18:39.712 }, 00:18:39.712 "method": "bdev_malloc_create" 00:18:39.712 }, 00:18:39.712 { 00:18:39.712 "method": "bdev_wait_for_examine" 00:18:39.712 } 00:18:39.712 ] 00:18:39.712 } 00:18:39.712 ] 00:18:39.712 } 00:18:39.712 [2024-04-26 12:15:33.113043] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.980 [2024-04-26 12:15:33.224354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.439  Copying: 201/512 [MB] (201 MBps) Copying: 407/512 [MB] (205 MBps) Copying: 512/512 [MB] (average 203 MBps) 00:18:43.439 00:18:43.439 00:18:43.439 real 0m7.798s 00:18:43.439 user 0m6.773s 00:18:43.439 sys 0m0.854s 00:18:43.439 12:15:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:43.439 12:15:36 -- common/autotest_common.sh@10 -- # set +x 00:18:43.439 ************************************ 00:18:43.439 END TEST dd_malloc_copy 00:18:43.439 ************************************ 00:18:43.439 ************************************ 00:18:43.439 END TEST spdk_dd_malloc 00:18:43.439 ************************************ 00:18:43.439 00:18:43.439 real 0m8.018s 00:18:43.439 user 0m6.841s 00:18:43.439 sys 0m0.989s 00:18:43.439 12:15:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:43.439 12:15:36 -- common/autotest_common.sh@10 -- # set +x 00:18:43.696 12:15:36 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:18:43.696 12:15:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:43.696 12:15:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:43.696 12:15:36 -- common/autotest_common.sh@10 -- # set +x 00:18:43.696 ************************************ 00:18:43.696 START TEST spdk_dd_bdev_to_bdev 00:18:43.696 ************************************ 00:18:43.696 12:15:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:18:43.696 * Looking for test storage... 
00:18:43.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:43.696 12:15:37 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:43.696 12:15:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.696 12:15:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.696 12:15:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.696 12:15:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.696 12:15:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.696 12:15:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.696 12:15:37 -- paths/export.sh@5 -- # export PATH 00:18:43.696 12:15:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@55 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:18:43.696 12:15:37 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:18:43.696 12:15:37 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:18:43.696 12:15:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:43.696 12:15:37 -- common/autotest_common.sh@10 -- # set +x 00:18:43.953 ************************************ 00:18:43.953 START TEST dd_inflate_file 00:18:43.953 ************************************ 00:18:43.953 12:15:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:18:43.953 [2024-04-26 12:15:37.227165] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:43.953 [2024-04-26 12:15:37.228210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63796 ] 00:18:43.953 [2024-04-26 12:15:37.368886] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.210 [2024-04-26 12:15:37.472711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.467  Copying: 64/64 [MB] (average 1600 MBps) 00:18:44.467 00:18:44.467 ************************************ 00:18:44.467 END TEST dd_inflate_file 00:18:44.467 00:18:44.467 real 0m0.673s 00:18:44.467 user 0m0.415s 00:18:44.467 sys 0m0.310s 00:18:44.467 12:15:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:44.467 12:15:37 -- common/autotest_common.sh@10 -- # set +x 00:18:44.467 ************************************ 00:18:44.467 12:15:37 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:18:44.467 12:15:37 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:18:44.467 12:15:37 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:18:44.467 12:15:37 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:18:44.467 12:15:37 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:18:44.467 12:15:37 -- dd/common.sh@31 -- # xtrace_disable 00:18:44.467 12:15:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:44.467 12:15:37 -- common/autotest_common.sh@10 -- # set +x 00:18:44.467 12:15:37 -- common/autotest_common.sh@10 -- # set +x 00:18:44.725 { 00:18:44.725 "subsystems": [ 00:18:44.725 { 00:18:44.725 "subsystem": "bdev", 
00:18:44.725 "config": [ 00:18:44.725 { 00:18:44.725 "params": { 00:18:44.725 "trtype": "pcie", 00:18:44.725 "traddr": "0000:00:10.0", 00:18:44.725 "name": "Nvme0" 00:18:44.725 }, 00:18:44.725 "method": "bdev_nvme_attach_controller" 00:18:44.725 }, 00:18:44.725 { 00:18:44.725 "params": { 00:18:44.725 "trtype": "pcie", 00:18:44.725 "traddr": "0000:00:11.0", 00:18:44.725 "name": "Nvme1" 00:18:44.725 }, 00:18:44.725 "method": "bdev_nvme_attach_controller" 00:18:44.725 }, 00:18:44.725 { 00:18:44.725 "method": "bdev_wait_for_examine" 00:18:44.725 } 00:18:44.725 ] 00:18:44.725 } 00:18:44.725 ] 00:18:44.725 } 00:18:44.725 ************************************ 00:18:44.725 START TEST dd_copy_to_out_bdev 00:18:44.725 ************************************ 00:18:44.725 12:15:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:18:44.725 [2024-04-26 12:15:38.025007] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:44.725 [2024-04-26 12:15:38.025115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63841 ] 00:18:44.725 [2024-04-26 12:15:38.165237] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.982 [2024-04-26 12:15:38.278445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.354  Copying: 64/64 [MB] (average 67 MBps) 00:18:46.354 00:18:46.354 00:18:46.354 real 0m1.762s 00:18:46.354 user 0m1.470s 00:18:46.354 sys 0m1.305s 00:18:46.354 12:15:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:46.354 12:15:39 -- common/autotest_common.sh@10 -- # set +x 00:18:46.354 ************************************ 00:18:46.354 END TEST dd_copy_to_out_bdev 00:18:46.354 ************************************ 00:18:46.354 12:15:39 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:18:46.354 12:15:39 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:18:46.354 12:15:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:46.354 12:15:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:46.354 12:15:39 -- common/autotest_common.sh@10 -- # set +x 00:18:46.612 ************************************ 00:18:46.612 START TEST dd_offset_magic 00:18:46.612 ************************************ 00:18:46.612 12:15:39 -- common/autotest_common.sh@1111 -- # offset_magic 00:18:46.612 12:15:39 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:18:46.612 12:15:39 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:18:46.612 12:15:39 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:18:46.612 12:15:39 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:18:46.612 12:15:39 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:18:46.612 12:15:39 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:18:46.612 12:15:39 -- dd/common.sh@31 -- # xtrace_disable 00:18:46.612 12:15:39 -- common/autotest_common.sh@10 -- # set +x 00:18:46.612 [2024-04-26 12:15:39.888013] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:46.612 [2024-04-26 12:15:39.888091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63889 ] 00:18:46.612 { 00:18:46.612 "subsystems": [ 00:18:46.612 { 00:18:46.612 "subsystem": "bdev", 00:18:46.612 "config": [ 00:18:46.612 { 00:18:46.612 "params": { 00:18:46.612 "trtype": "pcie", 00:18:46.612 "traddr": "0000:00:10.0", 00:18:46.612 "name": "Nvme0" 00:18:46.612 }, 00:18:46.612 "method": "bdev_nvme_attach_controller" 00:18:46.612 }, 00:18:46.612 { 00:18:46.612 "params": { 00:18:46.612 "trtype": "pcie", 00:18:46.612 "traddr": "0000:00:11.0", 00:18:46.612 "name": "Nvme1" 00:18:46.612 }, 00:18:46.612 "method": "bdev_nvme_attach_controller" 00:18:46.612 }, 00:18:46.612 { 00:18:46.612 "method": "bdev_wait_for_examine" 00:18:46.612 } 00:18:46.612 ] 00:18:46.612 } 00:18:46.612 ] 00:18:46.612 } 00:18:46.612 [2024-04-26 12:15:40.023656] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.870 [2024-04-26 12:15:40.126947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.392  Copying: 65/65 [MB] (average 955 MBps) 00:18:47.392 00:18:47.392 12:15:40 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:18:47.392 12:15:40 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:18:47.392 12:15:40 -- dd/common.sh@31 -- # xtrace_disable 00:18:47.392 12:15:40 -- common/autotest_common.sh@10 -- # set +x 00:18:47.392 [2024-04-26 12:15:40.768858] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:47.392 [2024-04-26 12:15:40.768960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63904 ] 00:18:47.392 { 00:18:47.392 "subsystems": [ 00:18:47.392 { 00:18:47.392 "subsystem": "bdev", 00:18:47.392 "config": [ 00:18:47.392 { 00:18:47.392 "params": { 00:18:47.392 "trtype": "pcie", 00:18:47.392 "traddr": "0000:00:10.0", 00:18:47.392 "name": "Nvme0" 00:18:47.392 }, 00:18:47.392 "method": "bdev_nvme_attach_controller" 00:18:47.392 }, 00:18:47.392 { 00:18:47.392 "params": { 00:18:47.392 "trtype": "pcie", 00:18:47.392 "traddr": "0000:00:11.0", 00:18:47.392 "name": "Nvme1" 00:18:47.392 }, 00:18:47.392 "method": "bdev_nvme_attach_controller" 00:18:47.392 }, 00:18:47.392 { 00:18:47.392 "method": "bdev_wait_for_examine" 00:18:47.392 } 00:18:47.392 ] 00:18:47.392 } 00:18:47.392 ] 00:18:47.392 } 00:18:47.651 [2024-04-26 12:15:40.905890] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.651 [2024-04-26 12:15:41.020143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.166  Copying: 1024/1024 [kB] (average 1000 MBps) 00:18:48.166 00:18:48.166 12:15:41 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:18:48.166 12:15:41 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:18:48.166 12:15:41 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:18:48.166 12:15:41 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:18:48.166 12:15:41 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:18:48.166 12:15:41 -- dd/common.sh@31 -- # xtrace_disable 00:18:48.166 12:15:41 -- common/autotest_common.sh@10 -- # set +x 00:18:48.166 [2024-04-26 12:15:41.559440] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:48.166 [2024-04-26 12:15:41.560517] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63920 ] 00:18:48.166 { 00:18:48.166 "subsystems": [ 00:18:48.166 { 00:18:48.166 "subsystem": "bdev", 00:18:48.166 "config": [ 00:18:48.166 { 00:18:48.166 "params": { 00:18:48.166 "trtype": "pcie", 00:18:48.166 "traddr": "0000:00:10.0", 00:18:48.166 "name": "Nvme0" 00:18:48.166 }, 00:18:48.166 "method": "bdev_nvme_attach_controller" 00:18:48.166 }, 00:18:48.166 { 00:18:48.166 "params": { 00:18:48.166 "trtype": "pcie", 00:18:48.166 "traddr": "0000:00:11.0", 00:18:48.166 "name": "Nvme1" 00:18:48.166 }, 00:18:48.166 "method": "bdev_nvme_attach_controller" 00:18:48.166 }, 00:18:48.166 { 00:18:48.166 "method": "bdev_wait_for_examine" 00:18:48.166 } 00:18:48.166 ] 00:18:48.166 } 00:18:48.166 ] 00:18:48.166 } 00:18:48.424 [2024-04-26 12:15:41.700490] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.424 [2024-04-26 12:15:41.797102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.258  Copying: 65/65 [MB] (average 1031 MBps) 00:18:49.258 00:18:49.258 12:15:42 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:18:49.258 12:15:42 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:18:49.258 12:15:42 -- dd/common.sh@31 -- # xtrace_disable 00:18:49.258 12:15:42 -- common/autotest_common.sh@10 -- # set +x 00:18:49.258 [2024-04-26 12:15:42.486222] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:49.258 [2024-04-26 12:15:42.486296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63940 ] 00:18:49.258 { 00:18:49.258 "subsystems": [ 00:18:49.258 { 00:18:49.258 "subsystem": "bdev", 00:18:49.258 "config": [ 00:18:49.258 { 00:18:49.258 "params": { 00:18:49.258 "trtype": "pcie", 00:18:49.258 "traddr": "0000:00:10.0", 00:18:49.258 "name": "Nvme0" 00:18:49.258 }, 00:18:49.258 "method": "bdev_nvme_attach_controller" 00:18:49.258 }, 00:18:49.258 { 00:18:49.258 "params": { 00:18:49.258 "trtype": "pcie", 00:18:49.258 "traddr": "0000:00:11.0", 00:18:49.258 "name": "Nvme1" 00:18:49.258 }, 00:18:49.258 "method": "bdev_nvme_attach_controller" 00:18:49.258 }, 00:18:49.258 { 00:18:49.258 "method": "bdev_wait_for_examine" 00:18:49.258 } 00:18:49.258 ] 00:18:49.258 } 00:18:49.258 ] 00:18:49.258 } 00:18:49.258 [2024-04-26 12:15:42.618704] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.522 [2024-04-26 12:15:42.728826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.780  Copying: 1024/1024 [kB] (average 500 MBps) 00:18:49.780 00:18:49.780 12:15:43 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:18:49.780 12:15:43 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:18:49.780 00:18:49.780 real 0m3.351s 00:18:49.780 user 0m2.507s 00:18:49.780 sys 0m0.918s 00:18:49.780 12:15:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:49.780 ************************************ 00:18:49.780 END TEST dd_offset_magic 00:18:49.780 ************************************ 00:18:49.780 12:15:43 -- common/autotest_common.sh@10 -- # set +x 00:18:49.780 12:15:43 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:18:49.780 12:15:43 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:18:49.780 12:15:43 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:18:49.780 12:15:43 -- dd/common.sh@11 -- # local nvme_ref= 00:18:49.780 12:15:43 -- dd/common.sh@12 -- # local size=4194330 00:18:49.780 12:15:43 -- dd/common.sh@14 -- # local bs=1048576 00:18:49.780 12:15:43 -- dd/common.sh@15 -- # local count=5 00:18:49.780 12:15:43 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:18:49.780 12:15:43 -- dd/common.sh@18 -- # gen_conf 00:18:49.780 12:15:43 -- dd/common.sh@31 -- # xtrace_disable 00:18:49.780 12:15:43 -- common/autotest_common.sh@10 -- # set +x 00:18:50.046 [2024-04-26 12:15:43.289797] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:50.046 [2024-04-26 12:15:43.290058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63977 ] 00:18:50.046 { 00:18:50.046 "subsystems": [ 00:18:50.046 { 00:18:50.046 "subsystem": "bdev", 00:18:50.046 "config": [ 00:18:50.046 { 00:18:50.046 "params": { 00:18:50.046 "trtype": "pcie", 00:18:50.046 "traddr": "0000:00:10.0", 00:18:50.046 "name": "Nvme0" 00:18:50.046 }, 00:18:50.046 "method": "bdev_nvme_attach_controller" 00:18:50.046 }, 00:18:50.046 { 00:18:50.046 "params": { 00:18:50.046 "trtype": "pcie", 00:18:50.046 "traddr": "0000:00:11.0", 00:18:50.046 "name": "Nvme1" 00:18:50.046 }, 00:18:50.046 "method": "bdev_nvme_attach_controller" 00:18:50.046 }, 00:18:50.046 { 00:18:50.046 "method": "bdev_wait_for_examine" 00:18:50.046 } 00:18:50.046 ] 00:18:50.046 } 00:18:50.046 ] 00:18:50.046 } 00:18:50.046 [2024-04-26 12:15:43.431362] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.306 [2024-04-26 12:15:43.547716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.834  Copying: 5120/5120 [kB] (average 1000 MBps) 00:18:50.834 00:18:50.834 12:15:44 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:18:50.834 12:15:44 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:18:50.834 12:15:44 -- dd/common.sh@11 -- # local nvme_ref= 00:18:50.834 12:15:44 -- dd/common.sh@12 -- # local size=4194330 00:18:50.834 12:15:44 -- dd/common.sh@14 -- # local bs=1048576 00:18:50.834 12:15:44 -- dd/common.sh@15 -- # local count=5 00:18:50.834 12:15:44 -- dd/common.sh@18 -- # gen_conf 00:18:50.834 12:15:44 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:18:50.834 12:15:44 -- dd/common.sh@31 -- # xtrace_disable 00:18:50.834 12:15:44 -- common/autotest_common.sh@10 -- # set +x 00:18:50.834 [2024-04-26 12:15:44.084889] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:50.834 [2024-04-26 12:15:44.084978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63997 ] 00:18:50.834 { 00:18:50.834 "subsystems": [ 00:18:50.834 { 00:18:50.834 "subsystem": "bdev", 00:18:50.834 "config": [ 00:18:50.834 { 00:18:50.834 "params": { 00:18:50.834 "trtype": "pcie", 00:18:50.834 "traddr": "0000:00:10.0", 00:18:50.834 "name": "Nvme0" 00:18:50.834 }, 00:18:50.834 "method": "bdev_nvme_attach_controller" 00:18:50.834 }, 00:18:50.834 { 00:18:50.834 "params": { 00:18:50.834 "trtype": "pcie", 00:18:50.834 "traddr": "0000:00:11.0", 00:18:50.834 "name": "Nvme1" 00:18:50.834 }, 00:18:50.834 "method": "bdev_nvme_attach_controller" 00:18:50.834 }, 00:18:50.834 { 00:18:50.834 "method": "bdev_wait_for_examine" 00:18:50.834 } 00:18:50.834 ] 00:18:50.834 } 00:18:50.834 ] 00:18:50.834 } 00:18:50.834 [2024-04-26 12:15:44.224360] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.097 [2024-04-26 12:15:44.324888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.361  Copying: 5120/5120 [kB] (average 714 MBps) 00:18:51.361 00:18:51.361 12:15:44 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:18:51.620 ************************************ 00:18:51.620 END TEST spdk_dd_bdev_to_bdev 00:18:51.620 ************************************ 00:18:51.620 00:18:51.620 real 0m7.836s 00:18:51.620 user 0m5.750s 00:18:51.620 sys 0m3.346s 00:18:51.620 12:15:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:51.620 12:15:44 -- common/autotest_common.sh@10 -- # set +x 00:18:51.620 12:15:44 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:18:51.620 12:15:44 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:18:51.620 12:15:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:51.620 12:15:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:51.620 12:15:44 -- common/autotest_common.sh@10 -- # set +x 00:18:51.620 ************************************ 00:18:51.620 START TEST spdk_dd_uring 00:18:51.620 ************************************ 00:18:51.620 12:15:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:18:51.620 * Looking for test storage... 
00:18:51.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:51.620 12:15:45 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:51.620 12:15:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:51.620 12:15:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:51.620 12:15:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:51.620 12:15:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.620 12:15:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.620 12:15:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.620 12:15:45 -- paths/export.sh@5 -- # export PATH 00:18:51.620 12:15:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.620 12:15:45 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:18:51.620 12:15:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:51.620 12:15:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:51.620 12:15:45 -- common/autotest_common.sh@10 -- # set +x 00:18:51.878 ************************************ 00:18:51.878 START TEST dd_uring_copy 00:18:51.878 ************************************ 00:18:51.879 12:15:45 -- common/autotest_common.sh@1111 -- # uring_zram_copy 00:18:51.879 12:15:45 -- dd/uring.sh@15 -- # local zram_dev_id 00:18:51.879 12:15:45 -- dd/uring.sh@16 -- # local magic 00:18:51.879 12:15:45 -- dd/uring.sh@17 -- # local 
magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:18:51.879 12:15:45 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:18:51.879 12:15:45 -- dd/uring.sh@19 -- # local verify_magic 00:18:51.879 12:15:45 -- dd/uring.sh@21 -- # init_zram 00:18:51.879 12:15:45 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:18:51.879 12:15:45 -- dd/common.sh@164 -- # return 00:18:51.879 12:15:45 -- dd/uring.sh@22 -- # create_zram_dev 00:18:51.879 12:15:45 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:18:51.879 12:15:45 -- dd/uring.sh@22 -- # zram_dev_id=1 00:18:51.879 12:15:45 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:18:51.879 12:15:45 -- dd/common.sh@181 -- # local id=1 00:18:51.879 12:15:45 -- dd/common.sh@182 -- # local size=512M 00:18:51.879 12:15:45 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:18:51.879 12:15:45 -- dd/common.sh@186 -- # echo 512M 00:18:51.879 12:15:45 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:18:51.879 12:15:45 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:18:51.879 12:15:45 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:18:51.879 12:15:45 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:18:51.879 12:15:45 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:18:51.879 12:15:45 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:18:51.879 12:15:45 -- dd/uring.sh@41 -- # gen_bytes 1024 00:18:51.879 12:15:45 -- dd/common.sh@98 -- # xtrace_disable 00:18:51.879 12:15:45 -- common/autotest_common.sh@10 -- # set +x 00:18:51.879 12:15:45 -- dd/uring.sh@41 -- # magic=jbt0uhce6o4kyktezjseshafxyefy40nck0cb5hmfdpms3jrcdi8dzv1jvxvm2gdbyivmyvisoo4skotr7z8gwyqwxe13k0dcgl2yio27g9s5olgo7t4m68fv0bb029ds8t14w442l1hub6i39czbbz7sfgs2telq7bgptsri4pymlgwjjiz5eb23o480094trcrryxv1qfb1jptkbqjyqgpn48xsneydr1bnvmamkb8na08v4pjrkwkaurhit4j05bbd90g0n5ic0d7iu0p7dt1567td3zxs78r1aygq48w48d2jhs9dw6eeookmaimalqe5rkfwtoku8a6myd9m8zduxr0ipuw1u8kd64p9wk0yl3fld3lvcgxscixlqyk9tnaxhmm4vxah65kr5fzkes70v2bxhpu52e5j9mm6d2fxmzo7b4ymv16evzb01mmw05ye80bhqan7pfzqg77zfe7gif41bw7iut2pve9t5cbiq6kz7iwdlusonyl3nqfma0zse8le2mxwyjhshm1w6g6k5hp59hmpd7cn8sc6jtyvkfjtgq0g7vsm4295jfmnj8rkpiepslplikij7udsviiwu32omjqiqvbymka1xk0nakrfzkj4adqniqqewuk5d8pc2nw0lmcmexmfslg78mz106zbu5589xlm9b6sub8sgjhefo6doziecoqcyao85qbtlzu5xtkpjrkun1z2iev5eblluub5y8xtkwfrh2n91ekm1sc8bwas5n66dbakd6nxknjimkfjoj8hl3b68kvj7i55tauhtc9oes1xoloxquh8m4jzc5kcuts0gmt6lyhkchumt1mo48lxmynxaeu3045zq5lhtrr3uampgyr4kqo0k3vad3xe5fg5ainkb02mfoj57v7viqsqr7k4j2ujlw0mmo8bg8qlu6m6vnbyn66bs515r3yp0ufvchicv34mbmpvnoh816onffo0pjokea8p5hh2pd4nad71gctopa3bf1vbk4hs8jak9sn 00:18:51.879 12:15:45 -- dd/uring.sh@42 -- # echo 
jbt0uhce6o4kyktezjseshafxyefy40nck0cb5hmfdpms3jrcdi8dzv1jvxvm2gdbyivmyvisoo4skotr7z8gwyqwxe13k0dcgl2yio27g9s5olgo7t4m68fv0bb029ds8t14w442l1hub6i39czbbz7sfgs2telq7bgptsri4pymlgwjjiz5eb23o480094trcrryxv1qfb1jptkbqjyqgpn48xsneydr1bnvmamkb8na08v4pjrkwkaurhit4j05bbd90g0n5ic0d7iu0p7dt1567td3zxs78r1aygq48w48d2jhs9dw6eeookmaimalqe5rkfwtoku8a6myd9m8zduxr0ipuw1u8kd64p9wk0yl3fld3lvcgxscixlqyk9tnaxhmm4vxah65kr5fzkes70v2bxhpu52e5j9mm6d2fxmzo7b4ymv16evzb01mmw05ye80bhqan7pfzqg77zfe7gif41bw7iut2pve9t5cbiq6kz7iwdlusonyl3nqfma0zse8le2mxwyjhshm1w6g6k5hp59hmpd7cn8sc6jtyvkfjtgq0g7vsm4295jfmnj8rkpiepslplikij7udsviiwu32omjqiqvbymka1xk0nakrfzkj4adqniqqewuk5d8pc2nw0lmcmexmfslg78mz106zbu5589xlm9b6sub8sgjhefo6doziecoqcyao85qbtlzu5xtkpjrkun1z2iev5eblluub5y8xtkwfrh2n91ekm1sc8bwas5n66dbakd6nxknjimkfjoj8hl3b68kvj7i55tauhtc9oes1xoloxquh8m4jzc5kcuts0gmt6lyhkchumt1mo48lxmynxaeu3045zq5lhtrr3uampgyr4kqo0k3vad3xe5fg5ainkb02mfoj57v7viqsqr7k4j2ujlw0mmo8bg8qlu6m6vnbyn66bs515r3yp0ufvchicv34mbmpvnoh816onffo0pjokea8p5hh2pd4nad71gctopa3bf1vbk4hs8jak9sn 00:18:51.879 12:15:45 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:18:51.879 [2024-04-26 12:15:45.208682] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:51.879 [2024-04-26 12:15:45.208982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64077 ] 00:18:51.879 [2024-04-26 12:15:45.342380] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.137 [2024-04-26 12:15:45.450409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.329  Copying: 511/511 [MB] (average 1110 MBps) 00:18:53.329 00:18:53.330 12:15:46 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:18:53.330 12:15:46 -- dd/uring.sh@54 -- # gen_conf 00:18:53.330 12:15:46 -- dd/common.sh@31 -- # xtrace_disable 00:18:53.330 12:15:46 -- common/autotest_common.sh@10 -- # set +x 00:18:53.330 [2024-04-26 12:15:46.699419] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:18:53.330 [2024-04-26 12:15:46.699513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64099 ] 00:18:53.330 { 00:18:53.330 "subsystems": [ 00:18:53.330 { 00:18:53.330 "subsystem": "bdev", 00:18:53.330 "config": [ 00:18:53.330 { 00:18:53.330 "params": { 00:18:53.330 "block_size": 512, 00:18:53.330 "num_blocks": 1048576, 00:18:53.330 "name": "malloc0" 00:18:53.330 }, 00:18:53.330 "method": "bdev_malloc_create" 00:18:53.330 }, 00:18:53.330 { 00:18:53.330 "params": { 00:18:53.330 "filename": "/dev/zram1", 00:18:53.330 "name": "uring0" 00:18:53.330 }, 00:18:53.330 "method": "bdev_uring_create" 00:18:53.330 }, 00:18:53.330 { 00:18:53.330 "method": "bdev_wait_for_examine" 00:18:53.330 } 00:18:53.330 ] 00:18:53.330 } 00:18:53.330 ] 00:18:53.330 } 00:18:53.588 [2024-04-26 12:15:46.838337] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.588 [2024-04-26 12:15:46.958284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.745  Copying: 222/512 [MB] (222 MBps) Copying: 451/512 [MB] (229 MBps) Copying: 512/512 [MB] (average 225 MBps) 00:18:56.745 00:18:56.745 12:15:49 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:18:56.745 12:15:49 -- dd/uring.sh@60 -- # gen_conf 00:18:56.745 12:15:49 -- dd/common.sh@31 -- # xtrace_disable 00:18:56.745 12:15:49 -- common/autotest_common.sh@10 -- # set +x 00:18:56.745 [2024-04-26 12:15:49.986628] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:18:56.745 [2024-04-26 12:15:49.986739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64143 ] 00:18:56.745 { 00:18:56.745 "subsystems": [ 00:18:56.745 { 00:18:56.745 "subsystem": "bdev", 00:18:56.745 "config": [ 00:18:56.745 { 00:18:56.745 "params": { 00:18:56.745 "block_size": 512, 00:18:56.745 "num_blocks": 1048576, 00:18:56.745 "name": "malloc0" 00:18:56.745 }, 00:18:56.745 "method": "bdev_malloc_create" 00:18:56.745 }, 00:18:56.745 { 00:18:56.745 "params": { 00:18:56.745 "filename": "/dev/zram1", 00:18:56.745 "name": "uring0" 00:18:56.745 }, 00:18:56.745 "method": "bdev_uring_create" 00:18:56.745 }, 00:18:56.745 { 00:18:56.745 "method": "bdev_wait_for_examine" 00:18:56.745 } 00:18:56.745 ] 00:18:56.745 } 00:18:56.745 ] 00:18:56.746 } 00:18:56.746 [2024-04-26 12:15:50.125506] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.004 [2024-04-26 12:15:50.241545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.442  Copying: 188/512 [MB] (188 MBps) Copying: 367/512 [MB] (179 MBps) Copying: 512/512 [MB] (average 181 MBps) 00:19:00.442 00:19:00.442 12:15:53 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:19:00.442 12:15:53 -- dd/uring.sh@66 -- # [[ 
jbt0uhce6o4kyktezjseshafxyefy40nck0cb5hmfdpms3jrcdi8dzv1jvxvm2gdbyivmyvisoo4skotr7z8gwyqwxe13k0dcgl2yio27g9s5olgo7t4m68fv0bb029ds8t14w442l1hub6i39czbbz7sfgs2telq7bgptsri4pymlgwjjiz5eb23o480094trcrryxv1qfb1jptkbqjyqgpn48xsneydr1bnvmamkb8na08v4pjrkwkaurhit4j05bbd90g0n5ic0d7iu0p7dt1567td3zxs78r1aygq48w48d2jhs9dw6eeookmaimalqe5rkfwtoku8a6myd9m8zduxr0ipuw1u8kd64p9wk0yl3fld3lvcgxscixlqyk9tnaxhmm4vxah65kr5fzkes70v2bxhpu52e5j9mm6d2fxmzo7b4ymv16evzb01mmw05ye80bhqan7pfzqg77zfe7gif41bw7iut2pve9t5cbiq6kz7iwdlusonyl3nqfma0zse8le2mxwyjhshm1w6g6k5hp59hmpd7cn8sc6jtyvkfjtgq0g7vsm4295jfmnj8rkpiepslplikij7udsviiwu32omjqiqvbymka1xk0nakrfzkj4adqniqqewuk5d8pc2nw0lmcmexmfslg78mz106zbu5589xlm9b6sub8sgjhefo6doziecoqcyao85qbtlzu5xtkpjrkun1z2iev5eblluub5y8xtkwfrh2n91ekm1sc8bwas5n66dbakd6nxknjimkfjoj8hl3b68kvj7i55tauhtc9oes1xoloxquh8m4jzc5kcuts0gmt6lyhkchumt1mo48lxmynxaeu3045zq5lhtrr3uampgyr4kqo0k3vad3xe5fg5ainkb02mfoj57v7viqsqr7k4j2ujlw0mmo8bg8qlu6m6vnbyn66bs515r3yp0ufvchicv34mbmpvnoh816onffo0pjokea8p5hh2pd4nad71gctopa3bf1vbk4hs8jak9sn == \j\b\t\0\u\h\c\e\6\o\4\k\y\k\t\e\z\j\s\e\s\h\a\f\x\y\e\f\y\4\0\n\c\k\0\c\b\5\h\m\f\d\p\m\s\3\j\r\c\d\i\8\d\z\v\1\j\v\x\v\m\2\g\d\b\y\i\v\m\y\v\i\s\o\o\4\s\k\o\t\r\7\z\8\g\w\y\q\w\x\e\1\3\k\0\d\c\g\l\2\y\i\o\2\7\g\9\s\5\o\l\g\o\7\t\4\m\6\8\f\v\0\b\b\0\2\9\d\s\8\t\1\4\w\4\4\2\l\1\h\u\b\6\i\3\9\c\z\b\b\z\7\s\f\g\s\2\t\e\l\q\7\b\g\p\t\s\r\i\4\p\y\m\l\g\w\j\j\i\z\5\e\b\2\3\o\4\8\0\0\9\4\t\r\c\r\r\y\x\v\1\q\f\b\1\j\p\t\k\b\q\j\y\q\g\p\n\4\8\x\s\n\e\y\d\r\1\b\n\v\m\a\m\k\b\8\n\a\0\8\v\4\p\j\r\k\w\k\a\u\r\h\i\t\4\j\0\5\b\b\d\9\0\g\0\n\5\i\c\0\d\7\i\u\0\p\7\d\t\1\5\6\7\t\d\3\z\x\s\7\8\r\1\a\y\g\q\4\8\w\4\8\d\2\j\h\s\9\d\w\6\e\e\o\o\k\m\a\i\m\a\l\q\e\5\r\k\f\w\t\o\k\u\8\a\6\m\y\d\9\m\8\z\d\u\x\r\0\i\p\u\w\1\u\8\k\d\6\4\p\9\w\k\0\y\l\3\f\l\d\3\l\v\c\g\x\s\c\i\x\l\q\y\k\9\t\n\a\x\h\m\m\4\v\x\a\h\6\5\k\r\5\f\z\k\e\s\7\0\v\2\b\x\h\p\u\5\2\e\5\j\9\m\m\6\d\2\f\x\m\z\o\7\b\4\y\m\v\1\6\e\v\z\b\0\1\m\m\w\0\5\y\e\8\0\b\h\q\a\n\7\p\f\z\q\g\7\7\z\f\e\7\g\i\f\4\1\b\w\7\i\u\t\2\p\v\e\9\t\5\c\b\i\q\6\k\z\7\i\w\d\l\u\s\o\n\y\l\3\n\q\f\m\a\0\z\s\e\8\l\e\2\m\x\w\y\j\h\s\h\m\1\w\6\g\6\k\5\h\p\5\9\h\m\p\d\7\c\n\8\s\c\6\j\t\y\v\k\f\j\t\g\q\0\g\7\v\s\m\4\2\9\5\j\f\m\n\j\8\r\k\p\i\e\p\s\l\p\l\i\k\i\j\7\u\d\s\v\i\i\w\u\3\2\o\m\j\q\i\q\v\b\y\m\k\a\1\x\k\0\n\a\k\r\f\z\k\j\4\a\d\q\n\i\q\q\e\w\u\k\5\d\8\p\c\2\n\w\0\l\m\c\m\e\x\m\f\s\l\g\7\8\m\z\1\0\6\z\b\u\5\5\8\9\x\l\m\9\b\6\s\u\b\8\s\g\j\h\e\f\o\6\d\o\z\i\e\c\o\q\c\y\a\o\8\5\q\b\t\l\z\u\5\x\t\k\p\j\r\k\u\n\1\z\2\i\e\v\5\e\b\l\l\u\u\b\5\y\8\x\t\k\w\f\r\h\2\n\9\1\e\k\m\1\s\c\8\b\w\a\s\5\n\6\6\d\b\a\k\d\6\n\x\k\n\j\i\m\k\f\j\o\j\8\h\l\3\b\6\8\k\v\j\7\i\5\5\t\a\u\h\t\c\9\o\e\s\1\x\o\l\o\x\q\u\h\8\m\4\j\z\c\5\k\c\u\t\s\0\g\m\t\6\l\y\h\k\c\h\u\m\t\1\m\o\4\8\l\x\m\y\n\x\a\e\u\3\0\4\5\z\q\5\l\h\t\r\r\3\u\a\m\p\g\y\r\4\k\q\o\0\k\3\v\a\d\3\x\e\5\f\g\5\a\i\n\k\b\0\2\m\f\o\j\5\7\v\7\v\i\q\s\q\r\7\k\4\j\2\u\j\l\w\0\m\m\o\8\b\g\8\q\l\u\6\m\6\v\n\b\y\n\6\6\b\s\5\1\5\r\3\y\p\0\u\f\v\c\h\i\c\v\3\4\m\b\m\p\v\n\o\h\8\1\6\o\n\f\f\o\0\p\j\o\k\e\a\8\p\5\h\h\2\p\d\4\n\a\d\7\1\g\c\t\o\p\a\3\b\f\1\v\b\k\4\h\s\8\j\a\k\9\s\n ]] 00:19:00.442 12:15:53 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:19:00.442 12:15:53 -- dd/uring.sh@69 -- # [[ 
jbt0uhce6o4kyktezjseshafxyefy40nck0cb5hmfdpms3jrcdi8dzv1jvxvm2gdbyivmyvisoo4skotr7z8gwyqwxe13k0dcgl2yio27g9s5olgo7t4m68fv0bb029ds8t14w442l1hub6i39czbbz7sfgs2telq7bgptsri4pymlgwjjiz5eb23o480094trcrryxv1qfb1jptkbqjyqgpn48xsneydr1bnvmamkb8na08v4pjrkwkaurhit4j05bbd90g0n5ic0d7iu0p7dt1567td3zxs78r1aygq48w48d2jhs9dw6eeookmaimalqe5rkfwtoku8a6myd9m8zduxr0ipuw1u8kd64p9wk0yl3fld3lvcgxscixlqyk9tnaxhmm4vxah65kr5fzkes70v2bxhpu52e5j9mm6d2fxmzo7b4ymv16evzb01mmw05ye80bhqan7pfzqg77zfe7gif41bw7iut2pve9t5cbiq6kz7iwdlusonyl3nqfma0zse8le2mxwyjhshm1w6g6k5hp59hmpd7cn8sc6jtyvkfjtgq0g7vsm4295jfmnj8rkpiepslplikij7udsviiwu32omjqiqvbymka1xk0nakrfzkj4adqniqqewuk5d8pc2nw0lmcmexmfslg78mz106zbu5589xlm9b6sub8sgjhefo6doziecoqcyao85qbtlzu5xtkpjrkun1z2iev5eblluub5y8xtkwfrh2n91ekm1sc8bwas5n66dbakd6nxknjimkfjoj8hl3b68kvj7i55tauhtc9oes1xoloxquh8m4jzc5kcuts0gmt6lyhkchumt1mo48lxmynxaeu3045zq5lhtrr3uampgyr4kqo0k3vad3xe5fg5ainkb02mfoj57v7viqsqr7k4j2ujlw0mmo8bg8qlu6m6vnbyn66bs515r3yp0ufvchicv34mbmpvnoh816onffo0pjokea8p5hh2pd4nad71gctopa3bf1vbk4hs8jak9sn == \j\b\t\0\u\h\c\e\6\o\4\k\y\k\t\e\z\j\s\e\s\h\a\f\x\y\e\f\y\4\0\n\c\k\0\c\b\5\h\m\f\d\p\m\s\3\j\r\c\d\i\8\d\z\v\1\j\v\x\v\m\2\g\d\b\y\i\v\m\y\v\i\s\o\o\4\s\k\o\t\r\7\z\8\g\w\y\q\w\x\e\1\3\k\0\d\c\g\l\2\y\i\o\2\7\g\9\s\5\o\l\g\o\7\t\4\m\6\8\f\v\0\b\b\0\2\9\d\s\8\t\1\4\w\4\4\2\l\1\h\u\b\6\i\3\9\c\z\b\b\z\7\s\f\g\s\2\t\e\l\q\7\b\g\p\t\s\r\i\4\p\y\m\l\g\w\j\j\i\z\5\e\b\2\3\o\4\8\0\0\9\4\t\r\c\r\r\y\x\v\1\q\f\b\1\j\p\t\k\b\q\j\y\q\g\p\n\4\8\x\s\n\e\y\d\r\1\b\n\v\m\a\m\k\b\8\n\a\0\8\v\4\p\j\r\k\w\k\a\u\r\h\i\t\4\j\0\5\b\b\d\9\0\g\0\n\5\i\c\0\d\7\i\u\0\p\7\d\t\1\5\6\7\t\d\3\z\x\s\7\8\r\1\a\y\g\q\4\8\w\4\8\d\2\j\h\s\9\d\w\6\e\e\o\o\k\m\a\i\m\a\l\q\e\5\r\k\f\w\t\o\k\u\8\a\6\m\y\d\9\m\8\z\d\u\x\r\0\i\p\u\w\1\u\8\k\d\6\4\p\9\w\k\0\y\l\3\f\l\d\3\l\v\c\g\x\s\c\i\x\l\q\y\k\9\t\n\a\x\h\m\m\4\v\x\a\h\6\5\k\r\5\f\z\k\e\s\7\0\v\2\b\x\h\p\u\5\2\e\5\j\9\m\m\6\d\2\f\x\m\z\o\7\b\4\y\m\v\1\6\e\v\z\b\0\1\m\m\w\0\5\y\e\8\0\b\h\q\a\n\7\p\f\z\q\g\7\7\z\f\e\7\g\i\f\4\1\b\w\7\i\u\t\2\p\v\e\9\t\5\c\b\i\q\6\k\z\7\i\w\d\l\u\s\o\n\y\l\3\n\q\f\m\a\0\z\s\e\8\l\e\2\m\x\w\y\j\h\s\h\m\1\w\6\g\6\k\5\h\p\5\9\h\m\p\d\7\c\n\8\s\c\6\j\t\y\v\k\f\j\t\g\q\0\g\7\v\s\m\4\2\9\5\j\f\m\n\j\8\r\k\p\i\e\p\s\l\p\l\i\k\i\j\7\u\d\s\v\i\i\w\u\3\2\o\m\j\q\i\q\v\b\y\m\k\a\1\x\k\0\n\a\k\r\f\z\k\j\4\a\d\q\n\i\q\q\e\w\u\k\5\d\8\p\c\2\n\w\0\l\m\c\m\e\x\m\f\s\l\g\7\8\m\z\1\0\6\z\b\u\5\5\8\9\x\l\m\9\b\6\s\u\b\8\s\g\j\h\e\f\o\6\d\o\z\i\e\c\o\q\c\y\a\o\8\5\q\b\t\l\z\u\5\x\t\k\p\j\r\k\u\n\1\z\2\i\e\v\5\e\b\l\l\u\u\b\5\y\8\x\t\k\w\f\r\h\2\n\9\1\e\k\m\1\s\c\8\b\w\a\s\5\n\6\6\d\b\a\k\d\6\n\x\k\n\j\i\m\k\f\j\o\j\8\h\l\3\b\6\8\k\v\j\7\i\5\5\t\a\u\h\t\c\9\o\e\s\1\x\o\l\o\x\q\u\h\8\m\4\j\z\c\5\k\c\u\t\s\0\g\m\t\6\l\y\h\k\c\h\u\m\t\1\m\o\4\8\l\x\m\y\n\x\a\e\u\3\0\4\5\z\q\5\l\h\t\r\r\3\u\a\m\p\g\y\r\4\k\q\o\0\k\3\v\a\d\3\x\e\5\f\g\5\a\i\n\k\b\0\2\m\f\o\j\5\7\v\7\v\i\q\s\q\r\7\k\4\j\2\u\j\l\w\0\m\m\o\8\b\g\8\q\l\u\6\m\6\v\n\b\y\n\6\6\b\s\5\1\5\r\3\y\p\0\u\f\v\c\h\i\c\v\3\4\m\b\m\p\v\n\o\h\8\1\6\o\n\f\f\o\0\p\j\o\k\e\a\8\p\5\h\h\2\p\d\4\n\a\d\7\1\g\c\t\o\p\a\3\b\f\1\v\b\k\4\h\s\8\j\a\k\9\s\n ]] 00:19:00.442 12:15:53 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:19:00.700 12:15:54 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:19:00.700 12:15:54 -- dd/uring.sh@75 -- # gen_conf 00:19:00.700 12:15:54 -- dd/common.sh@31 -- # xtrace_disable 00:19:00.700 12:15:54 -- common/autotest_common.sh@10 -- # set +x 
00:19:00.961 [2024-04-26 12:15:54.216061] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:19:00.961 [2024-04-26 12:15:54.216210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64216 ] 00:19:00.961 { 00:19:00.961 "subsystems": [ 00:19:00.961 { 00:19:00.961 "subsystem": "bdev", 00:19:00.961 "config": [ 00:19:00.961 { 00:19:00.961 "params": { 00:19:00.961 "block_size": 512, 00:19:00.961 "num_blocks": 1048576, 00:19:00.961 "name": "malloc0" 00:19:00.961 }, 00:19:00.961 "method": "bdev_malloc_create" 00:19:00.961 }, 00:19:00.961 { 00:19:00.961 "params": { 00:19:00.961 "filename": "/dev/zram1", 00:19:00.961 "name": "uring0" 00:19:00.961 }, 00:19:00.961 "method": "bdev_uring_create" 00:19:00.961 }, 00:19:00.961 { 00:19:00.961 "method": "bdev_wait_for_examine" 00:19:00.961 } 00:19:00.961 ] 00:19:00.961 } 00:19:00.961 ] 00:19:00.961 } 00:19:00.961 [2024-04-26 12:15:54.355938] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.219 [2024-04-26 12:15:54.486326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.353  Copying: 141/512 [MB] (141 MBps) Copying: 289/512 [MB] (148 MBps) Copying: 435/512 [MB] (145 MBps) Copying: 512/512 [MB] (average 144 MBps) 00:19:05.353 00:19:05.353 12:15:58 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:19:05.353 12:15:58 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:19:05.353 12:15:58 -- dd/uring.sh@87 -- # : 00:19:05.353 12:15:58 -- dd/uring.sh@87 -- # : 00:19:05.353 12:15:58 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:19:05.353 12:15:58 -- dd/uring.sh@87 -- # gen_conf 00:19:05.353 12:15:58 -- dd/common.sh@31 -- # xtrace_disable 00:19:05.353 12:15:58 -- common/autotest_common.sh@10 -- # set +x 00:19:05.353 [2024-04-26 12:15:58.803377] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:19:05.353 [2024-04-26 12:15:58.803469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64277 ] 00:19:05.353 { 00:19:05.353 "subsystems": [ 00:19:05.353 { 00:19:05.353 "subsystem": "bdev", 00:19:05.353 "config": [ 00:19:05.353 { 00:19:05.353 "params": { 00:19:05.353 "block_size": 512, 00:19:05.353 "num_blocks": 1048576, 00:19:05.353 "name": "malloc0" 00:19:05.353 }, 00:19:05.353 "method": "bdev_malloc_create" 00:19:05.353 }, 00:19:05.353 { 00:19:05.353 "params": { 00:19:05.353 "filename": "/dev/zram1", 00:19:05.353 "name": "uring0" 00:19:05.353 }, 00:19:05.353 "method": "bdev_uring_create" 00:19:05.353 }, 00:19:05.353 { 00:19:05.353 "params": { 00:19:05.353 "name": "uring0" 00:19:05.353 }, 00:19:05.353 "method": "bdev_uring_delete" 00:19:05.353 }, 00:19:05.353 { 00:19:05.353 "method": "bdev_wait_for_examine" 00:19:05.353 } 00:19:05.353 ] 00:19:05.353 } 00:19:05.353 ] 00:19:05.353 } 00:19:05.612 [2024-04-26 12:15:58.998926] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.870 [2024-04-26 12:15:59.088402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.387  Copying: 0/0 [B] (average 0 Bps) 00:19:06.387 00:19:06.387 12:15:59 -- dd/uring.sh@94 -- # gen_conf 00:19:06.387 12:15:59 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:19:06.387 12:15:59 -- dd/common.sh@31 -- # xtrace_disable 00:19:06.387 12:15:59 -- common/autotest_common.sh@10 -- # set +x 00:19:06.387 12:15:59 -- dd/uring.sh@94 -- # : 00:19:06.387 12:15:59 -- common/autotest_common.sh@638 -- # local es=0 00:19:06.387 12:15:59 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:19:06.387 12:15:59 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.387 12:15:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:06.387 12:15:59 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.387 12:15:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:06.387 12:15:59 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.387 12:15:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:06.387 12:15:59 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.387 12:15:59 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:06.387 12:15:59 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:19:06.387 [2024-04-26 12:15:59.854498] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:19:06.387 [2024-04-26 12:15:59.854587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64311 ] 00:19:06.646 { 00:19:06.646 "subsystems": [ 00:19:06.646 { 00:19:06.646 "subsystem": "bdev", 00:19:06.646 "config": [ 00:19:06.646 { 00:19:06.646 "params": { 00:19:06.646 "block_size": 512, 00:19:06.646 "num_blocks": 1048576, 00:19:06.646 "name": "malloc0" 00:19:06.646 }, 00:19:06.646 "method": "bdev_malloc_create" 00:19:06.646 }, 00:19:06.646 { 00:19:06.646 "params": { 00:19:06.646 "filename": "/dev/zram1", 00:19:06.646 "name": "uring0" 00:19:06.646 }, 00:19:06.646 "method": "bdev_uring_create" 00:19:06.646 }, 00:19:06.646 { 00:19:06.646 "params": { 00:19:06.646 "name": "uring0" 00:19:06.646 }, 00:19:06.646 "method": "bdev_uring_delete" 00:19:06.646 }, 00:19:06.646 { 00:19:06.646 "method": "bdev_wait_for_examine" 00:19:06.646 } 00:19:06.646 ] 00:19:06.646 } 00:19:06.646 ] 00:19:06.646 } 00:19:06.646 [2024-04-26 12:15:59.999072] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.905 [2024-04-26 12:16:00.124437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.164 [2024-04-26 12:16:00.402277] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:19:07.164 [2024-04-26 12:16:00.402334] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:19:07.164 [2024-04-26 12:16:00.402346] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:19:07.164 [2024-04-26 12:16:00.402357] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:07.422 [2024-04-26 12:16:00.726563] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:07.422 12:16:00 -- common/autotest_common.sh@641 -- # es=237 00:19:07.422 12:16:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:07.422 12:16:00 -- common/autotest_common.sh@650 -- # es=109 00:19:07.422 12:16:00 -- common/autotest_common.sh@651 -- # case "$es" in 00:19:07.422 12:16:00 -- common/autotest_common.sh@658 -- # es=1 00:19:07.422 12:16:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:07.422 12:16:00 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:19:07.422 12:16:00 -- dd/common.sh@172 -- # local id=1 00:19:07.422 12:16:00 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:19:07.422 12:16:00 -- dd/common.sh@176 -- # echo 1 00:19:07.422 12:16:00 -- dd/common.sh@177 -- # echo 1 00:19:07.422 12:16:00 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:19:07.681 00:19:07.681 real 0m15.976s 00:19:07.681 user 0m10.822s 00:19:07.681 sys 0m12.653s 00:19:07.681 12:16:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:07.681 12:16:01 -- common/autotest_common.sh@10 -- # set +x 00:19:07.681 ************************************ 00:19:07.681 END TEST dd_uring_copy 00:19:07.681 ************************************ 00:19:07.681 00:19:07.681 real 0m16.186s 00:19:07.681 user 0m10.898s 00:19:07.681 sys 0m12.773s 00:19:07.681 12:16:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:07.681 12:16:01 -- common/autotest_common.sh@10 -- # set +x 00:19:07.681 ************************************ 00:19:07.681 END TEST spdk_dd_uring 00:19:07.681 ************************************ 00:19:07.939 12:16:01 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:19:07.939 12:16:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:07.939 12:16:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:07.939 12:16:01 -- common/autotest_common.sh@10 -- # set +x 00:19:07.939 ************************************ 00:19:07.939 START TEST spdk_dd_sparse 00:19:07.939 ************************************ 00:19:07.939 12:16:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:19:07.939 * Looking for test storage... 00:19:07.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:07.939 12:16:01 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:07.939 12:16:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.939 12:16:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.939 12:16:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.939 12:16:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.939 12:16:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.939 12:16:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.939 12:16:01 -- paths/export.sh@5 -- # export PATH 00:19:07.939 12:16:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.939 12:16:01 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:19:07.939 12:16:01 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:19:07.939 12:16:01 -- dd/sparse.sh@110 -- # 
file1=file_zero1 00:19:07.940 12:16:01 -- dd/sparse.sh@111 -- # file2=file_zero2 00:19:07.940 12:16:01 -- dd/sparse.sh@112 -- # file3=file_zero3 00:19:07.940 12:16:01 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:19:07.940 12:16:01 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:19:07.940 12:16:01 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:19:07.940 12:16:01 -- dd/sparse.sh@118 -- # prepare 00:19:07.940 12:16:01 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:19:07.940 12:16:01 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:19:07.940 1+0 records in 00:19:07.940 1+0 records out 00:19:07.940 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00637577 s, 658 MB/s 00:19:07.940 12:16:01 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:19:07.940 1+0 records in 00:19:07.940 1+0 records out 00:19:07.940 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00712797 s, 588 MB/s 00:19:07.940 12:16:01 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:19:07.940 1+0 records in 00:19:07.940 1+0 records out 00:19:07.940 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00460793 s, 910 MB/s 00:19:07.940 12:16:01 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:19:07.940 12:16:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:07.940 12:16:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:07.940 12:16:01 -- common/autotest_common.sh@10 -- # set +x 00:19:08.198 ************************************ 00:19:08.198 START TEST dd_sparse_file_to_file 00:19:08.198 ************************************ 00:19:08.198 12:16:01 -- common/autotest_common.sh@1111 -- # file_to_file 00:19:08.198 12:16:01 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:19:08.198 12:16:01 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:19:08.198 12:16:01 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:19:08.198 12:16:01 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:19:08.198 12:16:01 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:19:08.198 12:16:01 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:19:08.198 12:16:01 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:19:08.198 12:16:01 -- dd/sparse.sh@41 -- # gen_conf 00:19:08.198 12:16:01 -- dd/common.sh@31 -- # xtrace_disable 00:19:08.198 12:16:01 -- common/autotest_common.sh@10 -- # set +x 00:19:08.198 [2024-04-26 12:16:01.512635] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:19:08.198 [2024-04-26 12:16:01.512732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64415 ] 00:19:08.198 { 00:19:08.198 "subsystems": [ 00:19:08.198 { 00:19:08.198 "subsystem": "bdev", 00:19:08.198 "config": [ 00:19:08.198 { 00:19:08.198 "params": { 00:19:08.198 "block_size": 4096, 00:19:08.198 "filename": "dd_sparse_aio_disk", 00:19:08.198 "name": "dd_aio" 00:19:08.198 }, 00:19:08.198 "method": "bdev_aio_create" 00:19:08.198 }, 00:19:08.198 { 00:19:08.198 "params": { 00:19:08.198 "lvs_name": "dd_lvstore", 00:19:08.198 "bdev_name": "dd_aio" 00:19:08.198 }, 00:19:08.198 "method": "bdev_lvol_create_lvstore" 00:19:08.198 }, 00:19:08.198 { 00:19:08.198 "method": "bdev_wait_for_examine" 00:19:08.198 } 00:19:08.198 ] 00:19:08.198 } 00:19:08.198 ] 00:19:08.198 } 00:19:08.198 [2024-04-26 12:16:01.654317] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.455 [2024-04-26 12:16:01.780558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.972  Copying: 12/36 [MB] (average 923 MBps) 00:19:08.972 00:19:08.972 12:16:02 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:19:08.972 12:16:02 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:19:08.972 12:16:02 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:19:08.972 12:16:02 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:19:08.972 12:16:02 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:19:08.972 12:16:02 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:19:08.972 12:16:02 -- dd/sparse.sh@52 -- # stat1_b=24576 00:19:08.972 12:16:02 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:19:08.972 12:16:02 -- dd/sparse.sh@53 -- # stat2_b=24576 00:19:08.972 12:16:02 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:19:08.972 00:19:08.972 real 0m0.817s 00:19:08.972 user 0m0.529s 00:19:08.972 sys 0m0.392s 00:19:08.972 12:16:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:08.972 ************************************ 00:19:08.972 END TEST dd_sparse_file_to_file 00:19:08.972 ************************************ 00:19:08.972 12:16:02 -- common/autotest_common.sh@10 -- # set +x 00:19:08.972 12:16:02 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:19:08.972 12:16:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:08.972 12:16:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:08.972 12:16:02 -- common/autotest_common.sh@10 -- # set +x 00:19:08.972 ************************************ 00:19:08.972 START TEST dd_sparse_file_to_bdev 00:19:08.972 ************************************ 00:19:08.972 12:16:02 -- common/autotest_common.sh@1111 -- # file_to_bdev 00:19:08.972 12:16:02 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:19:08.972 12:16:02 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:19:08.972 12:16:02 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:19:08.972 12:16:02 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:19:08.972 12:16:02 -- dd/sparse.sh@73 -- # gen_conf 00:19:08.972 12:16:02 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 
00:19:08.972 12:16:02 -- dd/common.sh@31 -- # xtrace_disable 00:19:08.972 12:16:02 -- common/autotest_common.sh@10 -- # set +x 00:19:09.231 [2024-04-26 12:16:02.469686] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:19:09.231 [2024-04-26 12:16:02.469826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64468 ] 00:19:09.231 { 00:19:09.231 "subsystems": [ 00:19:09.231 { 00:19:09.231 "subsystem": "bdev", 00:19:09.231 "config": [ 00:19:09.231 { 00:19:09.231 "params": { 00:19:09.231 "block_size": 4096, 00:19:09.231 "filename": "dd_sparse_aio_disk", 00:19:09.231 "name": "dd_aio" 00:19:09.231 }, 00:19:09.231 "method": "bdev_aio_create" 00:19:09.231 }, 00:19:09.231 { 00:19:09.231 "params": { 00:19:09.231 "lvs_name": "dd_lvstore", 00:19:09.231 "lvol_name": "dd_lvol", 00:19:09.231 "size": 37748736, 00:19:09.231 "thin_provision": true 00:19:09.231 }, 00:19:09.231 "method": "bdev_lvol_create" 00:19:09.231 }, 00:19:09.231 { 00:19:09.231 "method": "bdev_wait_for_examine" 00:19:09.231 } 00:19:09.231 ] 00:19:09.231 } 00:19:09.231 ] 00:19:09.231 } 00:19:09.231 [2024-04-26 12:16:02.608976] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.490 [2024-04-26 12:16:02.732621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.490 [2024-04-26 12:16:02.844632] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:19:09.490  Copying: 12/36 [MB] (average 521 MBps)[2024-04-26 12:16:02.887709] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:19:09.748 00:19:09.748 00:19:09.748 00:19:09.748 real 0m0.761s 00:19:09.748 user 0m0.537s 00:19:09.748 sys 0m0.367s 00:19:09.748 12:16:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:09.748 12:16:03 -- common/autotest_common.sh@10 -- # set +x 00:19:09.748 ************************************ 00:19:09.748 END TEST dd_sparse_file_to_bdev 00:19:09.748 ************************************ 00:19:09.748 12:16:03 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:19:09.748 12:16:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:09.748 12:16:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:09.748 12:16:03 -- common/autotest_common.sh@10 -- # set +x 00:19:10.019 ************************************ 00:19:10.019 START TEST dd_sparse_bdev_to_file 00:19:10.019 ************************************ 00:19:10.019 12:16:03 -- common/autotest_common.sh@1111 -- # bdev_to_file 00:19:10.019 12:16:03 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:19:10.019 12:16:03 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:19:10.019 12:16:03 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:19:10.019 12:16:03 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:19:10.019 12:16:03 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:19:10.019 12:16:03 -- dd/sparse.sh@91 -- # gen_conf 00:19:10.019 12:16:03 -- dd/common.sh@31 -- # xtrace_disable 00:19:10.019 12:16:03 -- 
common/autotest_common.sh@10 -- # set +x 00:19:10.019 [2024-04-26 12:16:03.325482] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:19:10.019 [2024-04-26 12:16:03.325596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64510 ] 00:19:10.019 { 00:19:10.020 "subsystems": [ 00:19:10.020 { 00:19:10.020 "subsystem": "bdev", 00:19:10.020 "config": [ 00:19:10.020 { 00:19:10.020 "params": { 00:19:10.020 "block_size": 4096, 00:19:10.020 "filename": "dd_sparse_aio_disk", 00:19:10.020 "name": "dd_aio" 00:19:10.020 }, 00:19:10.020 "method": "bdev_aio_create" 00:19:10.020 }, 00:19:10.020 { 00:19:10.020 "method": "bdev_wait_for_examine" 00:19:10.020 } 00:19:10.020 ] 00:19:10.020 } 00:19:10.020 ] 00:19:10.020 } 00:19:10.020 [2024-04-26 12:16:03.460809] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.279 [2024-04-26 12:16:03.572676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.537  Copying: 12/36 [MB] (average 1000 MBps) 00:19:10.537 00:19:10.537 12:16:03 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:19:10.537 12:16:03 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:19:10.537 12:16:03 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:19:10.537 12:16:03 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:19:10.537 12:16:03 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:19:10.537 12:16:03 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:19:10.537 12:16:03 -- dd/sparse.sh@102 -- # stat2_b=24576 00:19:10.537 12:16:03 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:19:10.537 12:16:03 -- dd/sparse.sh@103 -- # stat3_b=24576 00:19:10.537 12:16:03 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:19:10.537 00:19:10.537 real 0m0.727s 00:19:10.537 user 0m0.475s 00:19:10.537 sys 0m0.358s 00:19:10.537 12:16:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:10.537 ************************************ 00:19:10.537 END TEST dd_sparse_bdev_to_file 00:19:10.537 12:16:03 -- common/autotest_common.sh@10 -- # set +x 00:19:10.537 ************************************ 00:19:10.805 12:16:04 -- dd/sparse.sh@1 -- # cleanup 00:19:10.805 12:16:04 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:19:10.805 12:16:04 -- dd/sparse.sh@12 -- # rm file_zero1 00:19:10.805 12:16:04 -- dd/sparse.sh@13 -- # rm file_zero2 00:19:10.805 12:16:04 -- dd/sparse.sh@14 -- # rm file_zero3 00:19:10.805 ************************************ 00:19:10.805 END TEST spdk_dd_sparse 00:19:10.805 ************************************ 00:19:10.805 00:19:10.805 real 0m2.811s 00:19:10.805 user 0m1.703s 00:19:10.805 sys 0m1.408s 00:19:10.805 12:16:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:10.805 12:16:04 -- common/autotest_common.sh@10 -- # set +x 00:19:10.805 12:16:04 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:19:10.805 12:16:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:10.805 12:16:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:10.805 12:16:04 -- common/autotest_common.sh@10 -- # set +x 00:19:10.805 ************************************ 00:19:10.805 START TEST spdk_dd_negative 00:19:10.805 ************************************ 00:19:10.805 12:16:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 
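The pass/fail logic in the sparse tests above reduces to two stat comparisons: apparent size (stat %s) must match and allocated blocks (stat %b) must match, which is where the log's 37748736 == 37748736 and 24576 == 24576 assertions come from. A minimal sketch of the same check on two arbitrary files outside the harness (file names here are illustrative, not the file_zero* fixtures):

# Sketch: confirm a copy preserved both length and sparseness.
src=file_a
dst=file_b
src_s=$(stat --printf=%s "$src"); dst_s=$(stat --printf=%s "$dst")
src_b=$(stat --printf=%b "$src"); dst_b=$(stat --printf=%b "$dst")
if [[ "$src_s" == "$dst_s" && "$src_b" == "$dst_b" ]]; then
  echo "sparse copy verified: ${src_s} bytes apparent, ${src_b} blocks allocated"
else
  echo "mismatch: sizes ${src_s}/${dst_s}, blocks ${src_b}/${dst_b}" >&2
fi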
00:19:10.805 * Looking for test storage... 00:19:10.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:10.805 12:16:04 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:10.805 12:16:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.805 12:16:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.805 12:16:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.805 12:16:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.806 12:16:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.806 12:16:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.806 12:16:04 -- paths/export.sh@5 -- # export PATH 00:19:10.806 12:16:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.806 12:16:04 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:10.806 12:16:04 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:10.806 12:16:04 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:10.806 12:16:04 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:11.064 12:16:04 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:19:11.064 12:16:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:11.064 12:16:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:11.064 12:16:04 -- 
common/autotest_common.sh@10 -- # set +x 00:19:11.064 ************************************ 00:19:11.064 START TEST dd_invalid_arguments 00:19:11.064 ************************************ 00:19:11.064 12:16:04 -- common/autotest_common.sh@1111 -- # invalid_arguments 00:19:11.064 12:16:04 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:19:11.064 12:16:04 -- common/autotest_common.sh@638 -- # local es=0 00:19:11.064 12:16:04 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:19:11.064 12:16:04 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.064 12:16:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.064 12:16:04 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.064 12:16:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.064 12:16:04 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.064 12:16:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.064 12:16:04 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.064 12:16:04 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:11.064 12:16:04 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:19:11.064 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:19:11.064 00:19:11.064 CPU options: 00:19:11.064 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:19:11.064 (like [0,1,10]) 00:19:11.064 --lcores lcore to CPU mapping list. The list is in the format: 00:19:11.064 [<,lcores[@CPUs]>...] 00:19:11.064 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:19:11.064 Within the group, '-' is used for range separator, 00:19:11.064 ',' is used for single number separator. 00:19:11.064 '( )' can be omitted for single element group, 00:19:11.064 '@' can be omitted if cpus and lcores have the same value 00:19:11.064 --disable-cpumask-locks Disable CPU core lock files. 00:19:11.064 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:19:11.064 pollers in the app support interrupt mode) 00:19:11.064 -p, --main-core main (primary) core for DPDK 00:19:11.064 00:19:11.064 Configuration options: 00:19:11.064 -c, --config, --json JSON config file 00:19:11.064 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:19:11.064 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:19:11.064 --wait-for-rpc wait for RPCs to initialize subsystems 00:19:11.064 --rpcs-allowed comma-separated list of permitted RPCS 00:19:11.064 --json-ignore-init-errors don't exit on invalid config entry 00:19:11.064 00:19:11.064 Memory options: 00:19:11.064 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:19:11.064 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:19:11.064 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:19:11.064 -R, --huge-unlink unlink huge files after initialization 00:19:11.064 -n, --mem-channels number of memory channels used for DPDK 00:19:11.064 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:19:11.064 --msg-mempool-size global message memory pool size in count (default: 262143) 00:19:11.064 --no-huge run without using hugepages 00:19:11.064 -i, --shm-id shared memory ID (optional) 00:19:11.064 -g, --single-file-segments force creating just one hugetlbfs file 00:19:11.064 00:19:11.064 PCI options: 00:19:11.064 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:19:11.064 -B, --pci-blocked pci addr to block (can be used more than once) 00:19:11.064 -u, --no-pci disable PCI access 00:19:11.064 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:19:11.064 00:19:11.064 Log options: 00:19:11.064 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:19:11.064 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:19:11.064 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:19:11.064 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:19:11.064 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:19:11.064 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:19:11.064 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:19:11.064 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:19:11.064 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:19:11.064 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:19:11.064 virtio_vfio_user, vmd) 00:19:11.064 --silence-noticelog disable notice level logging to stderr 00:19:11.064 00:19:11.064 Trace options: 00:19:11.064 --num-trace-entries number of trace entries for each core, must be power of 2, 00:19:11.064 setting 0 to disable trace (default 32768) 00:19:11.064 Tracepoints vary in size and can use more than one trace entry. 00:19:11.064 -e, --tpoint-group [:] 00:19:11.064 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:19:11.064 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:19:11.064 [2024-04-26 12:16:04.398556] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:19:11.064 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:19:11.064 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:19:11.064 a tracepoint group. First tpoint inside a group can be enabled by 00:19:11.064 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:19:11.064 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:19:11.064 in /include/spdk_internal/trace_defs.h 00:19:11.064 00:19:11.064 Other options: 00:19:11.064 -h, --help show this usage 00:19:11.064 -v, --version print SPDK version 00:19:11.064 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:19:11.064 --env-context Opaque context for use of the env implementation 00:19:11.064 00:19:11.064 Application specific: 00:19:11.064 [--------- DD Options ---------] 00:19:11.064 --if Input file. Must specify either --if or --ib. 00:19:11.064 --ib Input bdev. Must specifier either --if or --ib 00:19:11.064 --of Output file. Must specify either --of or --ob. 00:19:11.064 --ob Output bdev. Must specify either --of or --ob. 00:19:11.064 --iflag Input file flags. 00:19:11.064 --oflag Output file flags. 00:19:11.064 --bs I/O unit size (default: 4096) 00:19:11.064 --qd Queue depth (default: 2) 00:19:11.064 --count I/O unit count. The number of I/O units to copy. (default: all) 00:19:11.064 --skip Skip this many I/O units at start of input. (default: 0) 00:19:11.064 --seek Skip this many I/O units at start of output. (default: 0) 00:19:11.064 --aio Force usage of AIO. (by default io_uring is used if available) 00:19:11.065 --sparse Enable hole skipping in input target 00:19:11.065 Available iflag and oflag values: 00:19:11.065 append - append mode 00:19:11.065 direct - use direct I/O for data 00:19:11.065 directory - fail unless a directory 00:19:11.065 dsync - use synchronized I/O for data 00:19:11.065 noatime - do not update access time 00:19:11.065 noctty - do not assign controlling terminal from file 00:19:11.065 nofollow - do not follow symlinks 00:19:11.065 nonblock - use non-blocking I/O 00:19:11.065 sync - use synchronized I/O for data and metadata 00:19:11.065 12:16:04 -- common/autotest_common.sh@641 -- # es=2 00:19:11.065 12:16:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:11.065 12:16:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:11.065 12:16:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:11.065 00:19:11.065 real 0m0.071s 00:19:11.065 user 0m0.046s 00:19:11.065 sys 0m0.023s 00:19:11.065 12:16:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:11.065 12:16:04 -- common/autotest_common.sh@10 -- # set +x 00:19:11.065 ************************************ 00:19:11.065 END TEST dd_invalid_arguments 00:19:11.065 ************************************ 00:19:11.065 12:16:04 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:19:11.065 12:16:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:11.065 12:16:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:11.065 12:16:04 -- common/autotest_common.sh@10 -- # set +x 00:19:11.065 ************************************ 00:19:11.065 START TEST dd_double_input 00:19:11.065 ************************************ 00:19:11.065 12:16:04 -- common/autotest_common.sh@1111 -- # double_input 00:19:11.065 12:16:04 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:19:11.065 12:16:04 -- common/autotest_common.sh@638 -- # local es=0 00:19:11.065 12:16:04 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:19:11.065 12:16:04 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.065 12:16:04 -- common/autotest_common.sh@630 
-- # case "$(type -t "$arg")" in 00:19:11.065 12:16:04 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.323 12:16:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.323 12:16:04 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.323 12:16:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.323 12:16:04 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.323 12:16:04 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:11.323 12:16:04 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:19:11.323 [2024-04-26 12:16:04.573820] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:19:11.323 12:16:04 -- common/autotest_common.sh@641 -- # es=22 00:19:11.323 12:16:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:11.323 12:16:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:11.323 12:16:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:11.323 00:19:11.323 real 0m0.061s 00:19:11.323 user 0m0.036s 00:19:11.323 sys 0m0.024s 00:19:11.323 12:16:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:11.323 12:16:04 -- common/autotest_common.sh@10 -- # set +x 00:19:11.323 ************************************ 00:19:11.323 END TEST dd_double_input 00:19:11.323 ************************************ 00:19:11.323 12:16:04 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:19:11.324 12:16:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:11.324 12:16:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:11.324 12:16:04 -- common/autotest_common.sh@10 -- # set +x 00:19:11.324 ************************************ 00:19:11.324 START TEST dd_double_output 00:19:11.324 ************************************ 00:19:11.324 12:16:04 -- common/autotest_common.sh@1111 -- # double_output 00:19:11.324 12:16:04 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:19:11.324 12:16:04 -- common/autotest_common.sh@638 -- # local es=0 00:19:11.324 12:16:04 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:19:11.324 12:16:04 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.324 12:16:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.324 12:16:04 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.324 12:16:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.324 12:16:04 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.324 12:16:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.324 12:16:04 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.324 12:16:04 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:11.324 12:16:04 -- common/autotest_common.sh@641 -- 
# /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:19:11.324 [2024-04-26 12:16:04.758694] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:19:11.324 12:16:04 -- common/autotest_common.sh@641 -- # es=22 00:19:11.324 12:16:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:11.324 12:16:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:11.324 12:16:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:11.324 00:19:11.324 real 0m0.072s 00:19:11.324 user 0m0.041s 00:19:11.324 sys 0m0.030s 00:19:11.324 12:16:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:11.324 12:16:04 -- common/autotest_common.sh@10 -- # set +x 00:19:11.324 ************************************ 00:19:11.324 END TEST dd_double_output 00:19:11.324 ************************************ 00:19:11.582 12:16:04 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:19:11.582 12:16:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:11.582 12:16:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:11.582 12:16:04 -- common/autotest_common.sh@10 -- # set +x 00:19:11.582 ************************************ 00:19:11.582 START TEST dd_no_input 00:19:11.582 ************************************ 00:19:11.582 12:16:04 -- common/autotest_common.sh@1111 -- # no_input 00:19:11.582 12:16:04 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:19:11.582 12:16:04 -- common/autotest_common.sh@638 -- # local es=0 00:19:11.582 12:16:04 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:19:11.582 12:16:04 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.582 12:16:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.582 12:16:04 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.582 12:16:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.582 12:16:04 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.582 12:16:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.582 12:16:04 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.582 12:16:04 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:11.582 12:16:04 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:19:11.582 [2024-04-26 12:16:04.935786] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:19:11.582 12:16:04 -- common/autotest_common.sh@641 -- # es=22 00:19:11.582 12:16:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:11.582 12:16:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:11.582 12:16:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:11.582 00:19:11.582 real 0m0.062s 00:19:11.582 user 0m0.039s 00:19:11.582 sys 0m0.022s 00:19:11.582 12:16:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:11.582 ************************************ 00:19:11.582 END TEST dd_no_input 00:19:11.582 ************************************ 00:19:11.582 12:16:04 -- common/autotest_common.sh@10 -- # set +x 00:19:11.582 12:16:04 -- dd/negative_dd.sh@111 -- # run_test dd_no_output 
no_output 00:19:11.582 12:16:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:11.582 12:16:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:11.582 12:16:04 -- common/autotest_common.sh@10 -- # set +x 00:19:11.840 ************************************ 00:19:11.840 START TEST dd_no_output 00:19:11.840 ************************************ 00:19:11.840 12:16:05 -- common/autotest_common.sh@1111 -- # no_output 00:19:11.840 12:16:05 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:11.840 12:16:05 -- common/autotest_common.sh@638 -- # local es=0 00:19:11.840 12:16:05 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:11.840 12:16:05 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.840 12:16:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.840 12:16:05 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.840 12:16:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.840 12:16:05 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.840 12:16:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.840 12:16:05 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.840 12:16:05 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:11.840 12:16:05 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:11.840 [2024-04-26 12:16:05.122736] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:19:11.840 12:16:05 -- common/autotest_common.sh@641 -- # es=22 00:19:11.840 12:16:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:11.840 12:16:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:11.840 12:16:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:11.840 00:19:11.840 real 0m0.076s 00:19:11.840 user 0m0.047s 00:19:11.840 sys 0m0.028s 00:19:11.840 12:16:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:11.840 ************************************ 00:19:11.840 END TEST dd_no_output 00:19:11.840 ************************************ 00:19:11.840 12:16:05 -- common/autotest_common.sh@10 -- # set +x 00:19:11.840 12:16:05 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:19:11.840 12:16:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:11.840 12:16:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:11.840 12:16:05 -- common/autotest_common.sh@10 -- # set +x 00:19:11.840 ************************************ 00:19:11.840 START TEST dd_wrong_blocksize 00:19:11.840 ************************************ 00:19:11.840 12:16:05 -- common/autotest_common.sh@1111 -- # wrong_blocksize 00:19:11.840 12:16:05 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:19:11.840 12:16:05 -- common/autotest_common.sh@638 -- # local es=0 00:19:11.840 12:16:05 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:19:11.840 12:16:05 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.840 12:16:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.840 12:16:05 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.840 12:16:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.840 12:16:05 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.840 12:16:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.840 12:16:05 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:11.840 12:16:05 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:11.840 12:16:05 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:19:11.840 [2024-04-26 12:16:05.307957] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:19:12.099 12:16:05 -- common/autotest_common.sh@641 -- # es=22 00:19:12.099 12:16:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:12.099 12:16:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:12.099 12:16:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:12.099 00:19:12.099 real 0m0.068s 00:19:12.099 user 0m0.051s 00:19:12.099 sys 0m0.017s 00:19:12.099 12:16:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:12.099 ************************************ 00:19:12.099 END TEST dd_wrong_blocksize 00:19:12.099 ************************************ 00:19:12.099 12:16:05 -- common/autotest_common.sh@10 -- # set +x 00:19:12.099 12:16:05 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:19:12.099 12:16:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:12.099 12:16:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:12.099 12:16:05 -- common/autotest_common.sh@10 -- # set +x 00:19:12.099 ************************************ 00:19:12.099 START TEST dd_smaller_blocksize 00:19:12.099 ************************************ 00:19:12.099 12:16:05 -- common/autotest_common.sh@1111 -- # smaller_blocksize 00:19:12.099 12:16:05 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:19:12.099 12:16:05 -- common/autotest_common.sh@638 -- # local es=0 00:19:12.099 12:16:05 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:19:12.099 12:16:05 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:12.099 12:16:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:12.099 12:16:05 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:12.099 12:16:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:12.099 12:16:05 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:12.099 12:16:05 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:12.099 12:16:05 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:12.099 12:16:05 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:12.099 12:16:05 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:19:12.099 [2024-04-26 12:16:05.493379] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:19:12.099 [2024-04-26 12:16:05.493501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64769 ] 00:19:12.357 [2024-04-26 12:16:05.634798] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.357 [2024-04-26 12:16:05.760463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.922 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:19:12.922 [2024-04-26 12:16:06.137558] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:19:12.922 [2024-04-26 12:16:06.137650] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:12.922 [2024-04-26 12:16:06.254083] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:12.922 12:16:06 -- common/autotest_common.sh@641 -- # es=244 00:19:12.922 12:16:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:12.922 12:16:06 -- common/autotest_common.sh@650 -- # es=116 00:19:12.922 12:16:06 -- common/autotest_common.sh@651 -- # case "$es" in 00:19:12.922 12:16:06 -- common/autotest_common.sh@658 -- # es=1 00:19:12.922 12:16:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:12.922 00:19:12.922 real 0m0.948s 00:19:12.922 user 0m0.476s 00:19:12.922 sys 0m0.363s 00:19:12.922 12:16:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:12.922 ************************************ 00:19:12.922 END TEST dd_smaller_blocksize 00:19:12.922 ************************************ 00:19:12.922 12:16:06 -- common/autotest_common.sh@10 -- # set +x 00:19:13.182 12:16:06 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:19:13.182 12:16:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:13.182 12:16:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:13.182 12:16:06 -- common/autotest_common.sh@10 -- # set +x 00:19:13.182 ************************************ 00:19:13.182 START TEST dd_invalid_count 00:19:13.182 ************************************ 00:19:13.182 12:16:06 -- common/autotest_common.sh@1111 -- # invalid_count 00:19:13.182 12:16:06 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:19:13.182 12:16:06 -- common/autotest_common.sh@638 -- # local es=0 00:19:13.182 12:16:06 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:19:13.182 12:16:06 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.182 12:16:06 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.182 12:16:06 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.182 12:16:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.182 12:16:06 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.182 12:16:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.182 12:16:06 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.182 12:16:06 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:13.182 12:16:06 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:19:13.182 [2024-04-26 12:16:06.552146] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:19:13.182 12:16:06 -- common/autotest_common.sh@641 -- # es=22 00:19:13.182 12:16:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:13.182 12:16:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:13.182 12:16:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:13.182 00:19:13.182 real 0m0.074s 00:19:13.182 user 0m0.044s 00:19:13.182 sys 0m0.028s 00:19:13.182 12:16:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:13.182 12:16:06 -- common/autotest_common.sh@10 -- # set +x 00:19:13.182 ************************************ 00:19:13.182 END TEST dd_invalid_count 00:19:13.182 ************************************ 00:19:13.182 12:16:06 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:19:13.182 12:16:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:13.182 12:16:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:13.182 12:16:06 -- common/autotest_common.sh@10 -- # set +x 00:19:13.440 ************************************ 00:19:13.440 START TEST dd_invalid_oflag 00:19:13.440 ************************************ 00:19:13.440 12:16:06 -- common/autotest_common.sh@1111 -- # invalid_oflag 00:19:13.440 12:16:06 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:19:13.440 12:16:06 -- common/autotest_common.sh@638 -- # local es=0 00:19:13.440 12:16:06 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:19:13.440 12:16:06 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.440 12:16:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.440 12:16:06 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.440 12:16:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.440 12:16:06 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.440 12:16:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.440 12:16:06 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.440 12:16:06 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:13.440 12:16:06 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:19:13.440 [2024-04-26 12:16:06.730409] 
spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:19:13.440 12:16:06 -- common/autotest_common.sh@641 -- # es=22 00:19:13.440 12:16:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:13.440 12:16:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:13.440 12:16:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:13.440 00:19:13.440 real 0m0.070s 00:19:13.440 user 0m0.051s 00:19:13.440 sys 0m0.017s 00:19:13.440 12:16:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:13.440 12:16:06 -- common/autotest_common.sh@10 -- # set +x 00:19:13.440 ************************************ 00:19:13.440 END TEST dd_invalid_oflag 00:19:13.440 ************************************ 00:19:13.441 12:16:06 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:19:13.441 12:16:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:13.441 12:16:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:13.441 12:16:06 -- common/autotest_common.sh@10 -- # set +x 00:19:13.441 ************************************ 00:19:13.441 START TEST dd_invalid_iflag 00:19:13.441 ************************************ 00:19:13.441 12:16:06 -- common/autotest_common.sh@1111 -- # invalid_iflag 00:19:13.441 12:16:06 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:19:13.441 12:16:06 -- common/autotest_common.sh@638 -- # local es=0 00:19:13.441 12:16:06 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:19:13.441 12:16:06 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.441 12:16:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.441 12:16:06 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.441 12:16:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.441 12:16:06 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.441 12:16:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.441 12:16:06 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.441 12:16:06 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:13.441 12:16:06 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:19:13.699 [2024-04-26 12:16:06.913577] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:19:13.699 12:16:06 -- common/autotest_common.sh@641 -- # es=22 00:19:13.699 12:16:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:13.699 12:16:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:13.699 12:16:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:13.699 00:19:13.699 real 0m0.072s 00:19:13.699 user 0m0.045s 00:19:13.699 sys 0m0.025s 00:19:13.699 12:16:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:13.699 12:16:06 -- common/autotest_common.sh@10 -- # set +x 00:19:13.699 ************************************ 00:19:13.699 END TEST dd_invalid_iflag 00:19:13.699 ************************************ 00:19:13.699 12:16:06 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:19:13.699 12:16:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:13.699 12:16:06 -- common/autotest_common.sh@1093 
-- # xtrace_disable 00:19:13.699 12:16:06 -- common/autotest_common.sh@10 -- # set +x 00:19:13.699 ************************************ 00:19:13.699 START TEST dd_unknown_flag 00:19:13.699 ************************************ 00:19:13.699 12:16:07 -- common/autotest_common.sh@1111 -- # unknown_flag 00:19:13.699 12:16:07 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:19:13.699 12:16:07 -- common/autotest_common.sh@638 -- # local es=0 00:19:13.699 12:16:07 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:19:13.699 12:16:07 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.699 12:16:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.699 12:16:07 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.699 12:16:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.699 12:16:07 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.699 12:16:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.699 12:16:07 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.699 12:16:07 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:13.699 12:16:07 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:19:13.699 [2024-04-26 12:16:07.097519] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:19:13.699 [2024-04-26 12:16:07.097649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64884 ] 00:19:13.978 [2024-04-26 12:16:07.243909] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.978 [2024-04-26 12:16:07.380431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.237 [2024-04-26 12:16:07.470851] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:19:14.237 [2024-04-26 12:16:07.470938] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:14.237 [2024-04-26 12:16:07.471005] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:19:14.237 [2024-04-26 12:16:07.471018] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:14.237 [2024-04-26 12:16:07.471260] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:19:14.237 [2024-04-26 12:16:07.471277] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:14.237 [2024-04-26 12:16:07.471333] app.c: 953:app_stop: *NOTICE*: spdk_app_stop called twice 00:19:14.237 [2024-04-26 12:16:07.471343] app.c: 953:app_stop: *NOTICE*: spdk_app_stop called twice 00:19:14.237 [2024-04-26 12:16:07.586495] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:14.237 12:16:07 -- common/autotest_common.sh@641 -- # es=234 00:19:14.237 12:16:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:14.237 12:16:07 -- common/autotest_common.sh@650 -- # es=106 00:19:14.237 12:16:07 -- common/autotest_common.sh@651 -- # case "$es" in 00:19:14.237 12:16:07 -- common/autotest_common.sh@658 -- # es=1 00:19:14.237 12:16:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:14.237 00:19:14.237 real 0m0.659s 00:19:14.237 user 0m0.401s 00:19:14.237 sys 0m0.164s 00:19:14.237 12:16:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:14.237 ************************************ 00:19:14.237 END TEST dd_unknown_flag 00:19:14.237 ************************************ 00:19:14.237 12:16:07 -- common/autotest_common.sh@10 -- # set +x 00:19:14.497 12:16:07 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:19:14.497 12:16:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:14.497 12:16:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:14.497 12:16:07 -- common/autotest_common.sh@10 -- # set +x 00:19:14.497 ************************************ 00:19:14.497 START TEST dd_invalid_json 00:19:14.497 ************************************ 00:19:14.497 12:16:07 -- common/autotest_common.sh@1111 -- # invalid_json 00:19:14.497 12:16:07 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:19:14.497 12:16:07 -- dd/negative_dd.sh@95 -- # : 00:19:14.497 12:16:07 -- common/autotest_common.sh@638 -- # local es=0 00:19:14.497 12:16:07 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:19:14.497 12:16:07 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:14.497 12:16:07 -- common/autotest_common.sh@630 -- # 
case "$(type -t "$arg")" in 00:19:14.497 12:16:07 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:14.497 12:16:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:14.497 12:16:07 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:14.497 12:16:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:14.497 12:16:07 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:14.497 12:16:07 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:14.497 12:16:07 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:19:14.497 [2024-04-26 12:16:07.869406] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:19:14.497 [2024-04-26 12:16:07.869502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64916 ] 00:19:14.755 [2024-04-26 12:16:08.003830] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.755 [2024-04-26 12:16:08.100559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.755 [2024-04-26 12:16:08.100658] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:19:14.755 [2024-04-26 12:16:08.100674] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:14.755 [2024-04-26 12:16:08.100684] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:14.755 [2024-04-26 12:16:08.100721] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:19:14.755 12:16:08 -- common/autotest_common.sh@641 -- # es=234 00:19:14.755 12:16:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:14.755 12:16:08 -- common/autotest_common.sh@650 -- # es=106 00:19:14.755 12:16:08 -- common/autotest_common.sh@651 -- # case "$es" in 00:19:14.755 12:16:08 -- common/autotest_common.sh@658 -- # es=1 00:19:14.755 12:16:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:14.755 00:19:14.755 real 0m0.402s 00:19:14.755 user 0m0.223s 00:19:14.755 sys 0m0.076s 00:19:14.755 12:16:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:14.755 12:16:08 -- common/autotest_common.sh@10 -- # set +x 00:19:14.755 ************************************ 00:19:14.755 END TEST dd_invalid_json 00:19:14.755 ************************************ 00:19:15.013 00:19:15.013 real 0m4.069s 00:19:15.013 user 0m1.996s 00:19:15.013 sys 0m1.549s 00:19:15.013 12:16:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:15.013 ************************************ 00:19:15.013 12:16:08 -- common/autotest_common.sh@10 -- # set +x 00:19:15.013 END TEST spdk_dd_negative 00:19:15.013 ************************************ 00:19:15.013 ************************************ 00:19:15.013 END TEST spdk_dd 00:19:15.013 ************************************ 00:19:15.013 00:19:15.013 real 1m24.708s 00:19:15.013 user 0m55.461s 00:19:15.013 sys 0m34.811s 00:19:15.013 12:16:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:15.013 12:16:08 -- common/autotest_common.sh@10 -- # set +x 00:19:15.013 12:16:08 -- spdk/autotest.sh@207 -- # '[' 0 -eq 
1 ']' 00:19:15.013 12:16:08 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:19:15.013 12:16:08 -- spdk/autotest.sh@258 -- # timing_exit lib 00:19:15.013 12:16:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:15.013 12:16:08 -- common/autotest_common.sh@10 -- # set +x 00:19:15.013 12:16:08 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:19:15.013 12:16:08 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:19:15.014 12:16:08 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:19:15.014 12:16:08 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:19:15.014 12:16:08 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:19:15.014 12:16:08 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:19:15.014 12:16:08 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:19:15.014 12:16:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:15.014 12:16:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:15.014 12:16:08 -- common/autotest_common.sh@10 -- # set +x 00:19:15.014 ************************************ 00:19:15.014 START TEST nvmf_tcp 00:19:15.014 ************************************ 00:19:15.014 12:16:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:19:15.273 * Looking for test storage... 00:19:15.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:19:15.273 12:16:08 -- nvmf/nvmf.sh@10 -- # uname -s 00:19:15.273 12:16:08 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:19:15.273 12:16:08 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:15.273 12:16:08 -- nvmf/common.sh@7 -- # uname -s 00:19:15.273 12:16:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.273 12:16:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.273 12:16:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.273 12:16:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.273 12:16:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.273 12:16:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.273 12:16:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.273 12:16:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.273 12:16:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.273 12:16:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.273 12:16:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:19:15.273 12:16:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:19:15.273 12:16:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.273 12:16:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.273 12:16:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:15.273 12:16:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.273 12:16:08 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:15.273 12:16:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.273 12:16:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.273 12:16:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.273 12:16:08 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.273 12:16:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.273 12:16:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.273 12:16:08 -- paths/export.sh@5 -- # export PATH 00:19:15.273 12:16:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.273 12:16:08 -- nvmf/common.sh@47 -- # : 0 00:19:15.273 12:16:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:15.273 12:16:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:15.273 12:16:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.273 12:16:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.273 12:16:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.273 12:16:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:15.273 12:16:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:15.273 12:16:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:15.273 12:16:08 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:15.273 12:16:08 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:19:15.273 12:16:08 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:19:15.273 12:16:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:15.273 12:16:08 -- common/autotest_common.sh@10 -- # set +x 00:19:15.273 12:16:08 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:19:15.273 12:16:08 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:19:15.273 12:16:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:15.273 12:16:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:15.273 12:16:08 -- common/autotest_common.sh@10 -- # set +x 00:19:15.273 ************************************ 00:19:15.273 START TEST nvmf_host_management 00:19:15.273 ************************************ 00:19:15.273 12:16:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:19:15.273 * Looking for test storage... 
00:19:15.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:15.273 12:16:08 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:15.273 12:16:08 -- nvmf/common.sh@7 -- # uname -s 00:19:15.273 12:16:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.273 12:16:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.273 12:16:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.273 12:16:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.273 12:16:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.273 12:16:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.273 12:16:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.273 12:16:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.273 12:16:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.273 12:16:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.273 12:16:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:19:15.273 12:16:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:19:15.273 12:16:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.273 12:16:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.273 12:16:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:15.273 12:16:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.273 12:16:08 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:15.273 12:16:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.273 12:16:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.273 12:16:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.273 12:16:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.273 12:16:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.273 12:16:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.273 12:16:08 -- paths/export.sh@5 -- # export PATH 00:19:15.273 12:16:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.273 12:16:08 -- nvmf/common.sh@47 -- # : 0 00:19:15.273 12:16:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:15.273 12:16:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:15.273 12:16:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.273 12:16:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.273 12:16:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.273 12:16:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:15.273 12:16:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:15.273 12:16:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:15.273 12:16:08 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:15.273 12:16:08 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:15.273 12:16:08 -- target/host_management.sh@105 -- # nvmftestinit 00:19:15.273 12:16:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:15.273 12:16:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.273 12:16:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:15.273 12:16:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:15.273 12:16:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:15.273 12:16:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.273 12:16:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:15.273 12:16:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.532 12:16:08 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:15.532 12:16:08 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:15.532 12:16:08 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:15.532 12:16:08 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:15.532 12:16:08 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:15.532 12:16:08 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:15.532 12:16:08 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.532 12:16:08 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:15.532 12:16:08 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:15.532 12:16:08 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:15.532 12:16:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:15.532 12:16:08 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:15.532 12:16:08 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:15.532 12:16:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.532 12:16:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:15.532 12:16:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:15.532 12:16:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:15.532 12:16:08 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:15.532 12:16:08 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:15.532 Cannot find device "nvmf_init_br" 00:19:15.532 12:16:08 -- nvmf/common.sh@154 -- # true 00:19:15.532 12:16:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:15.532 Cannot find device "nvmf_tgt_br" 00:19:15.532 12:16:08 -- nvmf/common.sh@155 -- # true 00:19:15.532 12:16:08 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:15.532 Cannot find device "nvmf_tgt_br2" 00:19:15.532 12:16:08 -- nvmf/common.sh@156 -- # true 00:19:15.532 12:16:08 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:15.532 Cannot find device "nvmf_init_br" 00:19:15.532 12:16:08 -- nvmf/common.sh@157 -- # true 00:19:15.532 12:16:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:15.532 Cannot find device "nvmf_tgt_br" 00:19:15.532 12:16:08 -- nvmf/common.sh@158 -- # true 00:19:15.532 12:16:08 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:15.532 Cannot find device "nvmf_tgt_br2" 00:19:15.532 12:16:08 -- nvmf/common.sh@159 -- # true 00:19:15.532 12:16:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:15.532 Cannot find device "nvmf_br" 00:19:15.533 12:16:08 -- nvmf/common.sh@160 -- # true 00:19:15.533 12:16:08 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:15.533 Cannot find device "nvmf_init_if" 00:19:15.533 12:16:08 -- nvmf/common.sh@161 -- # true 00:19:15.533 12:16:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:15.533 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:15.533 12:16:08 -- nvmf/common.sh@162 -- # true 00:19:15.533 12:16:08 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:15.533 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:15.533 12:16:08 -- nvmf/common.sh@163 -- # true 00:19:15.533 12:16:08 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:15.533 12:16:08 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:15.533 12:16:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:15.533 12:16:08 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:15.533 12:16:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:15.533 12:16:08 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:15.533 12:16:08 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:15.533 12:16:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:15.533 12:16:08 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:15.533 12:16:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:15.533 12:16:08 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:15.533 12:16:08 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:15.533 12:16:08 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:15.533 12:16:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:15.533 12:16:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:15.533 12:16:08 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:15.533 12:16:08 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:15.792 12:16:09 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:15.792 12:16:09 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:15.792 12:16:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:15.792 12:16:09 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:15.792 12:16:09 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:15.792 12:16:09 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:15.792 12:16:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:15.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:15.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:19:15.792 00:19:15.792 --- 10.0.0.2 ping statistics --- 00:19:15.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.792 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:19:15.792 12:16:09 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:15.792 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:15.792 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:19:15.792 00:19:15.792 --- 10.0.0.3 ping statistics --- 00:19:15.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.792 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:15.792 12:16:09 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:15.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:15.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:15.792 00:19:15.792 --- 10.0.0.1 ping statistics --- 00:19:15.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.792 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:15.792 12:16:09 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:15.792 12:16:09 -- nvmf/common.sh@422 -- # return 0 00:19:15.792 12:16:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:15.792 12:16:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:15.792 12:16:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:15.792 12:16:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:15.792 12:16:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:15.792 12:16:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:15.792 12:16:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:15.792 12:16:09 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:19:15.792 12:16:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:15.792 12:16:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:15.792 12:16:09 -- common/autotest_common.sh@10 -- # set +x 00:19:15.792 ************************************ 00:19:15.792 START TEST nvmf_host_management 00:19:15.792 ************************************ 00:19:15.792 12:16:09 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:19:15.792 12:16:09 -- target/host_management.sh@69 -- # starttarget 00:19:15.792 12:16:09 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:19:15.792 12:16:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:15.792 12:16:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:15.792 12:16:09 -- common/autotest_common.sh@10 -- # set +x 00:19:15.792 12:16:09 -- nvmf/common.sh@470 -- # nvmfpid=65196 00:19:15.792 12:16:09 -- nvmf/common.sh@471 -- # waitforlisten 65196 00:19:15.792 12:16:09 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:15.792 12:16:09 -- common/autotest_common.sh@817 -- # '[' -z 65196 ']' 00:19:15.792 12:16:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.792 12:16:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:15.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.792 12:16:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.792 12:16:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:15.792 12:16:09 -- common/autotest_common.sh@10 -- # set +x 00:19:15.792 [2024-04-26 12:16:09.243082] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:19:15.792 [2024-04-26 12:16:09.243230] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.051 [2024-04-26 12:16:09.383206] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:16.051 [2024-04-26 12:16:09.505586] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.051 [2024-04-26 12:16:09.505674] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:16.051 [2024-04-26 12:16:09.505688] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.051 [2024-04-26 12:16:09.505698] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.051 [2024-04-26 12:16:09.505707] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.051 [2024-04-26 12:16:09.505868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.051 [2024-04-26 12:16:09.506004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:16.051 [2024-04-26 12:16:09.506641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:16.051 [2024-04-26 12:16:09.506654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.985 12:16:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:16.985 12:16:10 -- common/autotest_common.sh@850 -- # return 0 00:19:16.985 12:16:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:16.985 12:16:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:16.985 12:16:10 -- common/autotest_common.sh@10 -- # set +x 00:19:16.985 12:16:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.985 12:16:10 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:16.985 12:16:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.985 12:16:10 -- common/autotest_common.sh@10 -- # set +x 00:19:16.985 [2024-04-26 12:16:10.297162] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.985 12:16:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.985 12:16:10 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:19:16.985 12:16:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:16.985 12:16:10 -- common/autotest_common.sh@10 -- # set +x 00:19:16.985 12:16:10 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:19:16.985 12:16:10 -- target/host_management.sh@23 -- # cat 00:19:16.985 12:16:10 -- target/host_management.sh@30 -- # rpc_cmd 00:19:16.985 12:16:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.985 12:16:10 -- common/autotest_common.sh@10 -- # set +x 00:19:16.985 Malloc0 00:19:16.985 [2024-04-26 12:16:10.377032] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.985 12:16:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.985 12:16:10 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:19:16.985 12:16:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:16.985 12:16:10 -- common/autotest_common.sh@10 -- # set +x 00:19:16.985 12:16:10 -- target/host_management.sh@73 -- # perfpid=65250 00:19:16.985 12:16:10 -- target/host_management.sh@74 -- # waitforlisten 65250 /var/tmp/bdevperf.sock 00:19:16.985 12:16:10 -- common/autotest_common.sh@817 -- # '[' -z 65250 ']' 00:19:16.985 12:16:10 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:16.985 12:16:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.985 12:16:10 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:19:16.985 12:16:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:16.985 12:16:10 -- 
common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:16.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.985 12:16:10 -- nvmf/common.sh@521 -- # config=() 00:19:16.985 12:16:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:16.985 12:16:10 -- nvmf/common.sh@521 -- # local subsystem config 00:19:16.985 12:16:10 -- common/autotest_common.sh@10 -- # set +x 00:19:16.985 12:16:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:16.985 12:16:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:16.985 { 00:19:16.985 "params": { 00:19:16.985 "name": "Nvme$subsystem", 00:19:16.985 "trtype": "$TEST_TRANSPORT", 00:19:16.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:16.985 "adrfam": "ipv4", 00:19:16.985 "trsvcid": "$NVMF_PORT", 00:19:16.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:16.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:16.985 "hdgst": ${hdgst:-false}, 00:19:16.985 "ddgst": ${ddgst:-false} 00:19:16.985 }, 00:19:16.985 "method": "bdev_nvme_attach_controller" 00:19:16.985 } 00:19:16.985 EOF 00:19:16.985 )") 00:19:16.985 12:16:10 -- nvmf/common.sh@543 -- # cat 00:19:16.985 12:16:10 -- nvmf/common.sh@545 -- # jq . 00:19:16.985 12:16:10 -- nvmf/common.sh@546 -- # IFS=, 00:19:16.985 12:16:10 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:16.985 "params": { 00:19:16.985 "name": "Nvme0", 00:19:16.985 "trtype": "tcp", 00:19:16.985 "traddr": "10.0.0.2", 00:19:16.985 "adrfam": "ipv4", 00:19:16.985 "trsvcid": "4420", 00:19:16.985 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:16.985 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:16.985 "hdgst": false, 00:19:16.985 "ddgst": false 00:19:16.985 }, 00:19:16.985 "method": "bdev_nvme_attach_controller" 00:19:16.985 }' 00:19:17.277 [2024-04-26 12:16:10.480065] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:19:17.277 [2024-04-26 12:16:10.480154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65250 ] 00:19:17.277 [2024-04-26 12:16:10.627237] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.535 [2024-04-26 12:16:10.750047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.535 Running I/O for 10 seconds... 
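The RPCs that actually build the target-side subsystem are piped in from a generated rpcs.txt (host_management.sh@22-30 above), so they are not echoed in the trace. A hand-written equivalent of that bring-up and of the bdevperf attach config printed above — a sketch only, assuming the stock rpc.py helpers, the 10.0.0.2:4420 listener, and the Malloc geometry (64 MiB x 512 B) set earlier — would look like this:

  # Target side: nvmf_tgt is already running inside nvmf_tgt_ns_spdk (started above);
  # its RPC endpoint is a plain unix socket, so rpc.py can reach it from the host netns.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Host side: the same JSON shape gen_nvmf_target_json prints above, written to a
  # file (hypothetical path) and handed to bdevperf with the flags used in the trace.
  cat > /tmp/bdevperf_nvme.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0" } } ] } ] }
  EOF
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json \
      -q 64 -o 65536 -w verify -t 10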
00:19:18.103 12:16:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:18.103 12:16:11 -- common/autotest_common.sh@850 -- # return 0 00:19:18.103 12:16:11 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:18.103 12:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.103 12:16:11 -- common/autotest_common.sh@10 -- # set +x 00:19:18.363 12:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.363 12:16:11 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:18.363 12:16:11 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:19:18.363 12:16:11 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:18.363 12:16:11 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:19:18.363 12:16:11 -- target/host_management.sh@52 -- # local ret=1 00:19:18.363 12:16:11 -- target/host_management.sh@53 -- # local i 00:19:18.363 12:16:11 -- target/host_management.sh@54 -- # (( i = 10 )) 00:19:18.363 12:16:11 -- target/host_management.sh@54 -- # (( i != 0 )) 00:19:18.363 12:16:11 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:19:18.363 12:16:11 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:19:18.363 12:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.363 12:16:11 -- common/autotest_common.sh@10 -- # set +x 00:19:18.363 12:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.363 12:16:11 -- target/host_management.sh@55 -- # read_io_count=963 00:19:18.363 12:16:11 -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:19:18.363 12:16:11 -- target/host_management.sh@59 -- # ret=0 00:19:18.363 12:16:11 -- target/host_management.sh@60 -- # break 00:19:18.363 12:16:11 -- target/host_management.sh@64 -- # return 0 00:19:18.363 12:16:11 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:19:18.363 12:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.363 12:16:11 -- common/autotest_common.sh@10 -- # set +x 00:19:18.363 [2024-04-26 12:16:11.639374] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.639881] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.639992] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.640066] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.640138] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.640235] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.640314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.640384] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the 
state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.640482] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.640560] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.640619] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.640694] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.640782] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.640855] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.640906] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.640961] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.641023] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.641070] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.641125] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.641221] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.641300] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.641373] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.641443] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.641501] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.641550] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.641605] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.641663] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.641733] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.641803] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.363 [2024-04-26 12:16:11.641887] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.641959] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.642018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.642088] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.642150] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.642241] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.642317] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.642390] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.642461] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.642531] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.642601] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 12:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.364 [2024-04-26 12:16:11.642685] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.642761] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.642823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-04-26 12:16:11.642835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with id:0 cdw10:00000000 cdw11:00000000 00:19:18.364 the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.642860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.642875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.364 [2024-04-26 12:16:11.642886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.642896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.364 [2024-04-26 12:16:11.642905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 12:16:11 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:19:18.364 [2024-04-26 12:16:11.642915] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:18.364 [2024-04-26 12:16:11.642925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.642934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a671b0 is same with the state(5) to be set 00:19:18.364 12:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.364 [2024-04-26 12:16:11.643459] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 12:16:11 -- common/autotest_common.sh@10 -- # set +x 00:19:18.364 [2024-04-26 12:16:11.643560] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.643625] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.643701] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.643771] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.643829] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.643888] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.643945] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.643994] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.644059] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.644120] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.644190] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.644256] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.644314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.644370] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.644441] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.644502] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.644559] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 
00:19:18.364 [2024-04-26 12:16:11.644616] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c6a0 is same with the state(5) to be set 00:19:18.364 [2024-04-26 12:16:11.644757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.644788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.644811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.644822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.644834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.644843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.644855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.644865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.644877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.644887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.644898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.644908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.644919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.644929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.644940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.644950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.644962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.644972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.644983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.644993] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.645004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.645014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.645026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.645035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.645047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.645056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.645067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.645077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.645088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.645097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.645108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.645118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.645129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.645139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.645151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.645160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.364 [2024-04-26 12:16:11.645184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.364 [2024-04-26 12:16:11.645204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645227] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:18.365 [2024-04-26 12:16:11.645909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.645986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.645998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.646007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.646024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.646033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.646045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.646055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.646067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.646076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.365 [2024-04-26 12:16:11.646088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.365 [2024-04-26 12:16:11.646097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.366 [2024-04-26 12:16:11.646109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.366 [2024-04-26 12:16:11.646119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.366 [2024-04-26 
12:16:11.646131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.366 [2024-04-26 12:16:11.646140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.366 [2024-04-26 12:16:11.646152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.366 [2024-04-26 12:16:11.646162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.366 [2024-04-26 12:16:11.646184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.366 [2024-04-26 12:16:11.646195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.366 [2024-04-26 12:16:11.646206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.366 [2024-04-26 12:16:11.646226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.366 [2024-04-26 12:16:11.646236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8cb40 is same with the state(5) to be set 00:19:18.366 [2024-04-26 12:16:11.646311] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a8cb40 was disconnected and freed. reset controller. 00:19:18.366 [2024-04-26 12:16:11.647478] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:18.366 task offset: 0 on job bdev=Nvme0n1 fails 00:19:18.366 00:19:18.366 Latency(us) 00:19:18.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.366 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:18.366 Job: Nvme0n1 ended in about 0.72 seconds with error 00:19:18.366 Verification LBA range: start 0x0 length 0x400 00:19:18.366 Nvme0n1 : 0.72 1428.14 89.26 89.26 0.00 41175.33 7536.64 38844.97 00:19:18.366 =================================================================================================================== 00:19:18.366 Total : 1428.14 89.26 89.26 0.00 41175.33 7536.64 38844.97 00:19:18.366 [2024-04-26 12:16:11.649937] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:18.366 [2024-04-26 12:16:11.649966] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a671b0 (9): Bad file descriptor 00:19:18.366 12:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.366 12:16:11 -- target/host_management.sh@87 -- # sleep 1 00:19:18.366 [2024-04-26 12:16:11.659432] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:19.302 12:16:12 -- target/host_management.sh@91 -- # kill -9 65250 00:19:19.302 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65250) - No such process 00:19:19.302 12:16:12 -- target/host_management.sh@91 -- # true 00:19:19.302 12:16:12 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:19:19.302 12:16:12 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:19.302 12:16:12 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:19:19.302 12:16:12 -- nvmf/common.sh@521 -- # config=() 00:19:19.302 12:16:12 -- nvmf/common.sh@521 -- # local subsystem config 00:19:19.302 12:16:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:19.302 12:16:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:19.302 { 00:19:19.302 "params": { 00:19:19.302 "name": "Nvme$subsystem", 00:19:19.302 "trtype": "$TEST_TRANSPORT", 00:19:19.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.302 "adrfam": "ipv4", 00:19:19.302 "trsvcid": "$NVMF_PORT", 00:19:19.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.302 "hdgst": ${hdgst:-false}, 00:19:19.302 "ddgst": ${ddgst:-false} 00:19:19.302 }, 00:19:19.302 "method": "bdev_nvme_attach_controller" 00:19:19.302 } 00:19:19.302 EOF 00:19:19.302 )") 00:19:19.302 12:16:12 -- nvmf/common.sh@543 -- # cat 00:19:19.302 12:16:12 -- nvmf/common.sh@545 -- # jq . 00:19:19.302 12:16:12 -- nvmf/common.sh@546 -- # IFS=, 00:19:19.303 12:16:12 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:19.303 "params": { 00:19:19.303 "name": "Nvme0", 00:19:19.303 "trtype": "tcp", 00:19:19.303 "traddr": "10.0.0.2", 00:19:19.303 "adrfam": "ipv4", 00:19:19.303 "trsvcid": "4420", 00:19:19.303 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:19.303 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:19.303 "hdgst": false, 00:19:19.303 "ddgst": false 00:19:19.303 }, 00:19:19.303 "method": "bdev_nvme_attach_controller" 00:19:19.303 }' 00:19:19.303 [2024-04-26 12:16:12.707071] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:19:19.303 [2024-04-26 12:16:12.707198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65288 ] 00:19:19.561 [2024-04-26 12:16:12.843880] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.561 [2024-04-26 12:16:12.969982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.819 Running I/O for 1 seconds... 
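For reference, the config that gen_nvmf_target_json pipes into bdevperf over /dev/fd/62 above can be written out and reused standalone. The sketch below wraps the same attach-controller params in the standard SPDK JSON-config layout (the exact wrapper the helper emits may differ slightly); addresses, NQNs and bdevperf flags are the ones visible in this run.

#!/usr/bin/env bash
# Sketch: drive the same bdevperf workload outside the test harness.
# Assumes the target is still listening on 10.0.0.2:4420 and exposes
# nqn.2016-06.io.spdk:cnode0; paths match this CI workspace layout.
SPDK=/home/vagrant/spdk_repo/spdk
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same queue depth, IO size and workload as the run above.
"$SPDK/build/examples/bdevperf" --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 1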
00:19:20.810 00:19:20.810 Latency(us) 00:19:20.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.810 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:20.810 Verification LBA range: start 0x0 length 0x400 00:19:20.810 Nvme0n1 : 1.04 1472.51 92.03 0.00 0.00 42632.56 4438.57 38368.35 00:19:20.810 =================================================================================================================== 00:19:20.810 Total : 1472.51 92.03 0.00 0.00 42632.56 4438.57 38368.35 00:19:21.068 12:16:14 -- target/host_management.sh@102 -- # stoptarget 00:19:21.068 12:16:14 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:19:21.068 12:16:14 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:19:21.068 12:16:14 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:19:21.068 12:16:14 -- target/host_management.sh@40 -- # nvmftestfini 00:19:21.068 12:16:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:21.068 12:16:14 -- nvmf/common.sh@117 -- # sync 00:19:21.327 12:16:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:21.327 12:16:14 -- nvmf/common.sh@120 -- # set +e 00:19:21.327 12:16:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:21.327 12:16:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:21.327 rmmod nvme_tcp 00:19:21.327 rmmod nvme_fabrics 00:19:21.327 rmmod nvme_keyring 00:19:21.327 12:16:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:21.327 12:16:14 -- nvmf/common.sh@124 -- # set -e 00:19:21.328 12:16:14 -- nvmf/common.sh@125 -- # return 0 00:19:21.328 12:16:14 -- nvmf/common.sh@478 -- # '[' -n 65196 ']' 00:19:21.328 12:16:14 -- nvmf/common.sh@479 -- # killprocess 65196 00:19:21.328 12:16:14 -- common/autotest_common.sh@936 -- # '[' -z 65196 ']' 00:19:21.328 12:16:14 -- common/autotest_common.sh@940 -- # kill -0 65196 00:19:21.328 12:16:14 -- common/autotest_common.sh@941 -- # uname 00:19:21.328 12:16:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:21.328 12:16:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65196 00:19:21.328 12:16:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:21.328 killing process with pid 65196 00:19:21.328 12:16:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:21.328 12:16:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65196' 00:19:21.328 12:16:14 -- common/autotest_common.sh@955 -- # kill 65196 00:19:21.328 12:16:14 -- common/autotest_common.sh@960 -- # wait 65196 00:19:21.585 [2024-04-26 12:16:14.854649] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:19:21.585 12:16:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:21.585 12:16:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:21.585 12:16:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:21.585 12:16:14 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:21.585 12:16:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:21.585 12:16:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.585 12:16:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.585 12:16:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.585 12:16:14 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:21.585 00:19:21.585 real 0m5.736s 00:19:21.585 user 
0m24.135s 00:19:21.585 sys 0m1.358s 00:19:21.585 12:16:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:21.585 ************************************ 00:19:21.585 END TEST nvmf_host_management 00:19:21.585 ************************************ 00:19:21.585 12:16:14 -- common/autotest_common.sh@10 -- # set +x 00:19:21.585 12:16:14 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:21.585 00:19:21.585 real 0m6.326s 00:19:21.585 user 0m24.278s 00:19:21.585 sys 0m1.615s 00:19:21.585 12:16:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:21.585 ************************************ 00:19:21.585 END TEST nvmf_host_management 00:19:21.586 12:16:14 -- common/autotest_common.sh@10 -- # set +x 00:19:21.586 ************************************ 00:19:21.586 12:16:14 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:19:21.586 12:16:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:21.586 12:16:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:21.586 12:16:14 -- common/autotest_common.sh@10 -- # set +x 00:19:21.845 ************************************ 00:19:21.845 START TEST nvmf_lvol 00:19:21.845 ************************************ 00:19:21.845 12:16:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:19:21.845 * Looking for test storage... 00:19:21.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:21.846 12:16:15 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:21.846 12:16:15 -- nvmf/common.sh@7 -- # uname -s 00:19:21.846 12:16:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.846 12:16:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.846 12:16:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.846 12:16:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.846 12:16:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.846 12:16:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.846 12:16:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.846 12:16:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.846 12:16:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.846 12:16:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.846 12:16:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:19:21.846 12:16:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:19:21.846 12:16:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.846 12:16:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.846 12:16:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:21.846 12:16:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.846 12:16:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:21.846 12:16:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.846 12:16:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.846 12:16:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.846 12:16:15 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.846 12:16:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.846 12:16:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.846 12:16:15 -- paths/export.sh@5 -- # export PATH 00:19:21.846 12:16:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.846 12:16:15 -- nvmf/common.sh@47 -- # : 0 00:19:21.846 12:16:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:21.846 12:16:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:21.846 12:16:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.846 12:16:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.846 12:16:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.846 12:16:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:21.846 12:16:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:21.846 12:16:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:21.846 12:16:15 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:21.846 12:16:15 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:21.846 12:16:15 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:19:21.846 12:16:15 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:19:21.846 12:16:15 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:21.846 12:16:15 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:19:21.846 12:16:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:21.846 12:16:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:19:21.846 12:16:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:21.846 12:16:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:21.846 12:16:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:21.846 12:16:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.846 12:16:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.846 12:16:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.846 12:16:15 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:21.846 12:16:15 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:21.846 12:16:15 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:21.846 12:16:15 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:21.846 12:16:15 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:21.846 12:16:15 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:21.846 12:16:15 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.846 12:16:15 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.846 12:16:15 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:21.846 12:16:15 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:21.846 12:16:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:21.846 12:16:15 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:21.846 12:16:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:21.846 12:16:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.846 12:16:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:21.846 12:16:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:21.846 12:16:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:21.846 12:16:15 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:21.846 12:16:15 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:21.846 12:16:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:21.846 Cannot find device "nvmf_tgt_br" 00:19:21.846 12:16:15 -- nvmf/common.sh@155 -- # true 00:19:21.846 12:16:15 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:21.846 Cannot find device "nvmf_tgt_br2" 00:19:21.846 12:16:15 -- nvmf/common.sh@156 -- # true 00:19:21.846 12:16:15 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:21.846 12:16:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:21.846 Cannot find device "nvmf_tgt_br" 00:19:21.846 12:16:15 -- nvmf/common.sh@158 -- # true 00:19:21.846 12:16:15 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:21.846 Cannot find device "nvmf_tgt_br2" 00:19:21.846 12:16:15 -- nvmf/common.sh@159 -- # true 00:19:21.846 12:16:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:21.846 12:16:15 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:21.846 12:16:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:21.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:21.846 12:16:15 -- nvmf/common.sh@162 -- # true 00:19:21.846 12:16:15 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:22.105 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:22.105 12:16:15 -- nvmf/common.sh@163 -- # true 00:19:22.105 12:16:15 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:22.105 12:16:15 -- nvmf/common.sh@169 -- # ip link add 
nvmf_init_if type veth peer name nvmf_init_br 00:19:22.105 12:16:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:22.105 12:16:15 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:22.105 12:16:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:22.105 12:16:15 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:22.105 12:16:15 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:22.105 12:16:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:22.105 12:16:15 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:22.105 12:16:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:22.105 12:16:15 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:22.105 12:16:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:22.105 12:16:15 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:22.105 12:16:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:22.105 12:16:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:22.105 12:16:15 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:22.105 12:16:15 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:22.105 12:16:15 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:22.105 12:16:15 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:22.105 12:16:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:22.105 12:16:15 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:22.105 12:16:15 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:22.105 12:16:15 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:22.105 12:16:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:22.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:22.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:19:22.105 00:19:22.105 --- 10.0.0.2 ping statistics --- 00:19:22.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.105 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:22.105 12:16:15 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:22.105 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:22.105 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:19:22.105 00:19:22.105 --- 10.0.0.3 ping statistics --- 00:19:22.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.105 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:22.105 12:16:15 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:22.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:22.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:19:22.105 00:19:22.105 --- 10.0.0.1 ping statistics --- 00:19:22.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.105 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:22.105 12:16:15 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.105 12:16:15 -- nvmf/common.sh@422 -- # return 0 00:19:22.105 12:16:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:22.105 12:16:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.105 12:16:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:22.105 12:16:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:22.105 12:16:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.105 12:16:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:22.105 12:16:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:22.105 12:16:15 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:19:22.105 12:16:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:22.105 12:16:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:22.105 12:16:15 -- common/autotest_common.sh@10 -- # set +x 00:19:22.105 12:16:15 -- nvmf/common.sh@470 -- # nvmfpid=65526 00:19:22.105 12:16:15 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:22.105 12:16:15 -- nvmf/common.sh@471 -- # waitforlisten 65526 00:19:22.105 12:16:15 -- common/autotest_common.sh@817 -- # '[' -z 65526 ']' 00:19:22.105 12:16:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.105 12:16:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:22.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.105 12:16:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.105 12:16:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:22.105 12:16:15 -- common/autotest_common.sh@10 -- # set +x 00:19:22.363 [2024-04-26 12:16:15.607512] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:19:22.363 [2024-04-26 12:16:15.607634] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.363 [2024-04-26 12:16:15.749161] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:22.622 [2024-04-26 12:16:15.864928] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.622 [2024-04-26 12:16:15.865013] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.622 [2024-04-26 12:16:15.865042] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.622 [2024-04-26 12:16:15.865053] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.622 [2024-04-26 12:16:15.865061] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
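The nvmf_veth_init steps traced above build a small veth/bridge topology: nvmf_init_if (10.0.0.1/24) stays in the root namespace for the initiator, nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and everything is stitched together with the nvmf_br bridge plus an iptables ACCEPT rule for TCP port 4420. A condensed standalone sketch of the same setup, using the interface and namespace names from this run:

#!/usr/bin/env bash
# Sketch of the topology nvmf_veth_init assembles in the trace above.
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Initiator-side reachability check into the target namespace, as above.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3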
00:19:22.622 [2024-04-26 12:16:15.865769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.622 [2024-04-26 12:16:15.865924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.622 [2024-04-26 12:16:15.865937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.189 12:16:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:23.189 12:16:16 -- common/autotest_common.sh@850 -- # return 0 00:19:23.189 12:16:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:23.189 12:16:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:23.189 12:16:16 -- common/autotest_common.sh@10 -- # set +x 00:19:23.189 12:16:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.189 12:16:16 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:23.448 [2024-04-26 12:16:16.894574] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.706 12:16:16 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:23.965 12:16:17 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:19:23.965 12:16:17 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:24.224 12:16:17 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:19:24.224 12:16:17 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:19:24.481 12:16:17 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:19:24.739 12:16:18 -- target/nvmf_lvol.sh@29 -- # lvs=f7f28d96-5e25-454d-9c0b-8fbcaf11b3e1 00:19:24.739 12:16:18 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f7f28d96-5e25-454d-9c0b-8fbcaf11b3e1 lvol 20 00:19:24.997 12:16:18 -- target/nvmf_lvol.sh@32 -- # lvol=49714c5c-5efd-4174-8449-5169e6879223 00:19:24.997 12:16:18 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:25.255 12:16:18 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 49714c5c-5efd-4174-8449-5169e6879223 00:19:25.512 12:16:18 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:25.770 [2024-04-26 12:16:19.005606] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.770 12:16:19 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:26.028 12:16:19 -- target/nvmf_lvol.sh@42 -- # perf_pid=65600 00:19:26.028 12:16:19 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:19:26.028 12:16:19 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:19:26.961 12:16:20 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 49714c5c-5efd-4174-8449-5169e6879223 MY_SNAPSHOT 00:19:27.224 12:16:20 -- target/nvmf_lvol.sh@47 -- # snapshot=a183dc3f-f2b2-4abe-aad9-a865a3610dba 00:19:27.224 12:16:20 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 49714c5c-5efd-4174-8449-5169e6879223 30 00:19:27.495 12:16:20 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone a183dc3f-f2b2-4abe-aad9-a865a3610dba MY_CLONE 00:19:27.754 12:16:21 -- target/nvmf_lvol.sh@49 -- # clone=8ca283d1-96ad-4f9b-9773-7fba464f5bb6 00:19:27.754 12:16:21 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 8ca283d1-96ad-4f9b-9773-7fba464f5bb6 00:19:28.320 12:16:21 -- target/nvmf_lvol.sh@53 -- # wait 65600 00:19:36.431 Initializing NVMe Controllers 00:19:36.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:19:36.431 Controller IO queue size 128, less than required. 00:19:36.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:36.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:19:36.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:19:36.431 Initialization complete. Launching workers. 00:19:36.431 ======================================================== 00:19:36.431 Latency(us) 00:19:36.431 Device Information : IOPS MiB/s Average min max 00:19:36.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9708.48 37.92 13200.09 1995.43 78351.58 00:19:36.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9331.52 36.45 13733.05 2335.94 69504.29 00:19:36.431 ======================================================== 00:19:36.431 Total : 19040.00 74.38 13461.30 1995.43 78351.58 00:19:36.431 00:19:36.431 12:16:29 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:36.431 12:16:29 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 49714c5c-5efd-4174-8449-5169e6879223 00:19:36.691 12:16:30 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f7f28d96-5e25-454d-9c0b-8fbcaf11b3e1 00:19:36.949 12:16:30 -- target/nvmf_lvol.sh@60 -- # rm -f 00:19:36.949 12:16:30 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:19:36.949 12:16:30 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:19:36.949 12:16:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:36.949 12:16:30 -- nvmf/common.sh@117 -- # sync 00:19:36.949 12:16:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:36.949 12:16:30 -- nvmf/common.sh@120 -- # set +e 00:19:36.949 12:16:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:36.949 12:16:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:36.949 rmmod nvme_tcp 00:19:36.949 rmmod nvme_fabrics 00:19:36.949 rmmod nvme_keyring 00:19:36.949 12:16:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:36.949 12:16:30 -- nvmf/common.sh@124 -- # set -e 00:19:36.949 12:16:30 -- nvmf/common.sh@125 -- # return 0 00:19:36.949 12:16:30 -- nvmf/common.sh@478 -- # '[' -n 65526 ']' 00:19:36.949 12:16:30 -- nvmf/common.sh@479 -- # killprocess 65526 00:19:36.949 12:16:30 -- common/autotest_common.sh@936 -- # '[' -z 65526 ']' 00:19:36.949 12:16:30 -- common/autotest_common.sh@940 -- # kill -0 65526 00:19:36.949 12:16:30 -- common/autotest_common.sh@941 -- # uname 00:19:36.949 12:16:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:36.949 12:16:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
65526 00:19:36.949 12:16:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:36.949 12:16:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:36.949 killing process with pid 65526 00:19:36.949 12:16:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65526' 00:19:36.949 12:16:30 -- common/autotest_common.sh@955 -- # kill 65526 00:19:36.949 12:16:30 -- common/autotest_common.sh@960 -- # wait 65526 00:19:37.517 12:16:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:37.517 12:16:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:37.517 12:16:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:37.517 12:16:30 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:37.517 12:16:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:37.517 12:16:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.517 12:16:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.517 12:16:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.517 12:16:30 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:37.517 00:19:37.517 real 0m15.687s 00:19:37.517 user 1m4.540s 00:19:37.517 sys 0m4.781s 00:19:37.517 12:16:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:37.517 12:16:30 -- common/autotest_common.sh@10 -- # set +x 00:19:37.517 ************************************ 00:19:37.517 END TEST nvmf_lvol 00:19:37.517 ************************************ 00:19:37.517 12:16:30 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:37.517 12:16:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:37.517 12:16:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:37.517 12:16:30 -- common/autotest_common.sh@10 -- # set +x 00:19:37.517 ************************************ 00:19:37.517 START TEST nvmf_lvs_grow 00:19:37.517 ************************************ 00:19:37.517 12:16:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:37.517 * Looking for test storage... 
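Condensing the nvmf_lvol run above: two 64 MiB malloc bdevs are striped into a raid0, an lvstore is built on the raid, a 20 MiB lvol is exported over NVMe/TCP, and while spdk_nvme_perf writes to it the test snapshots, resizes (20 -> 30), clones and inflates the volume before tearing everything down. A hedged sketch of the same rpc.py sequence; the UUID variables hold whatever the create calls return rather than the literal values from this run.

#!/usr/bin/env bash
# Sketch of the nvmf_lvol.sh flow traced above; rpc_py path as in this workspace.
set -e
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512                      # -> Malloc0
$rpc_py bdev_malloc_create 64 512                      # -> Malloc1
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# spdk_nvme_perf runs against the namespace here (randwrite, qd 128, 10 s in the trace)
snap=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc_py bdev_lvol_resize "$lvol" 30
clone=$($rpc_py bdev_lvol_clone "$snap" MY_CLONE)
$rpc_py bdev_lvol_inflate "$clone"
# Teardown, as at the end of the test:
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc_py bdev_lvol_delete "$lvol"
$rpc_py bdev_lvol_delete_lvstore -u "$lvs"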
00:19:37.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:37.517 12:16:30 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:37.517 12:16:30 -- nvmf/common.sh@7 -- # uname -s 00:19:37.517 12:16:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.517 12:16:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.517 12:16:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.517 12:16:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.517 12:16:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.517 12:16:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.517 12:16:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.517 12:16:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.517 12:16:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.517 12:16:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.517 12:16:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:19:37.517 12:16:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:19:37.517 12:16:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.517 12:16:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.517 12:16:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:37.517 12:16:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.517 12:16:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:37.517 12:16:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.517 12:16:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.517 12:16:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.517 12:16:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.517 12:16:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.517 12:16:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.517 12:16:30 -- paths/export.sh@5 -- # export PATH 00:19:37.517 12:16:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.517 12:16:30 -- nvmf/common.sh@47 -- # : 0 00:19:37.517 12:16:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:37.517 12:16:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:37.517 12:16:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.517 12:16:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.517 12:16:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.517 12:16:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:37.517 12:16:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:37.517 12:16:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:37.517 12:16:30 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:37.517 12:16:30 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:37.517 12:16:30 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:19:37.517 12:16:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:37.517 12:16:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.517 12:16:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:37.517 12:16:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:37.517 12:16:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:37.517 12:16:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.518 12:16:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.518 12:16:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.518 12:16:30 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:37.518 12:16:30 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:37.518 12:16:30 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:37.518 12:16:30 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:37.518 12:16:30 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:37.518 12:16:30 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:37.518 12:16:30 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.518 12:16:30 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.518 12:16:30 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:37.518 12:16:30 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:37.518 12:16:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:37.518 12:16:30 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:37.518 12:16:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:37.518 12:16:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.518 12:16:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:37.518 12:16:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:37.518 12:16:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:37.518 12:16:30 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:37.518 12:16:30 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:37.776 12:16:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:37.776 Cannot find device "nvmf_tgt_br" 00:19:37.776 12:16:31 -- nvmf/common.sh@155 -- # true 00:19:37.776 12:16:31 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:37.776 Cannot find device "nvmf_tgt_br2" 00:19:37.776 12:16:31 -- nvmf/common.sh@156 -- # true 00:19:37.776 12:16:31 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:37.776 12:16:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:37.777 Cannot find device "nvmf_tgt_br" 00:19:37.777 12:16:31 -- nvmf/common.sh@158 -- # true 00:19:37.777 12:16:31 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:37.777 Cannot find device "nvmf_tgt_br2" 00:19:37.777 12:16:31 -- nvmf/common.sh@159 -- # true 00:19:37.777 12:16:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:37.777 12:16:31 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:37.777 12:16:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:37.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.777 12:16:31 -- nvmf/common.sh@162 -- # true 00:19:37.777 12:16:31 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:37.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.777 12:16:31 -- nvmf/common.sh@163 -- # true 00:19:37.777 12:16:31 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:37.777 12:16:31 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:37.777 12:16:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:37.777 12:16:31 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:37.777 12:16:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:37.777 12:16:31 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:37.777 12:16:31 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:37.777 12:16:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:37.777 12:16:31 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:37.777 12:16:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:37.777 12:16:31 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:37.777 12:16:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:37.777 12:16:31 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:37.777 12:16:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:37.777 12:16:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:19:37.777 12:16:31 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:38.036 12:16:31 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:38.036 12:16:31 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:38.036 12:16:31 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:38.036 12:16:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:38.036 12:16:31 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:38.036 12:16:31 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:38.036 12:16:31 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:38.036 12:16:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:38.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:38.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:19:38.036 00:19:38.036 --- 10.0.0.2 ping statistics --- 00:19:38.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.036 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:19:38.036 12:16:31 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:38.036 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:38.036 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:19:38.036 00:19:38.036 --- 10.0.0.3 ping statistics --- 00:19:38.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.036 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:38.036 12:16:31 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:38.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:38.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:19:38.036 00:19:38.036 --- 10.0.0.1 ping statistics --- 00:19:38.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.036 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:38.036 12:16:31 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.036 12:16:31 -- nvmf/common.sh@422 -- # return 0 00:19:38.036 12:16:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:38.036 12:16:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.036 12:16:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:38.036 12:16:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:38.036 12:16:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.036 12:16:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:38.036 12:16:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:38.036 12:16:31 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:19:38.036 12:16:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:38.036 12:16:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:38.036 12:16:31 -- common/autotest_common.sh@10 -- # set +x 00:19:38.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:38.036 12:16:31 -- nvmf/common.sh@470 -- # nvmfpid=65931 00:19:38.036 12:16:31 -- nvmf/common.sh@471 -- # waitforlisten 65931 00:19:38.036 12:16:31 -- common/autotest_common.sh@817 -- # '[' -z 65931 ']' 00:19:38.036 12:16:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.036 12:16:31 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:38.036 12:16:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:38.036 12:16:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.036 12:16:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:38.036 12:16:31 -- common/autotest_common.sh@10 -- # set +x 00:19:38.036 [2024-04-26 12:16:31.381085] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:19:38.036 [2024-04-26 12:16:31.381194] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.296 [2024-04-26 12:16:31.514059] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.296 [2024-04-26 12:16:31.615806] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.296 [2024-04-26 12:16:31.615860] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.296 [2024-04-26 12:16:31.615888] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.296 [2024-04-26 12:16:31.615896] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.296 [2024-04-26 12:16:31.615904] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
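nvmfappstart above amounts to launching nvmf_tgt inside the test namespace and blocking until its RPC socket answers on /var/tmp/spdk.sock. A simplified stand-in for that start-and-wait step; the polling loop is an assumption rather than the harness's exact waitforlisten logic, and it probes readiness with rpc_get_methods.

#!/usr/bin/env bash
# Sketch: start nvmf_tgt in the test namespace and wait for its RPC socket.
# Binary path and flags are the ones visible in this run.
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
tgt_pid=$!
for _ in $(seq 1 100); do
    # Readiness probe: any successful RPC means the socket is up.
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
# First real RPC of the test, as in the trace below.
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192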
00:19:38.296 [2024-04-26 12:16:31.615932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.231 12:16:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:39.231 12:16:32 -- common/autotest_common.sh@850 -- # return 0 00:19:39.231 12:16:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:39.231 12:16:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:39.231 12:16:32 -- common/autotest_common.sh@10 -- # set +x 00:19:39.231 12:16:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.231 12:16:32 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:39.490 [2024-04-26 12:16:32.724456] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.490 12:16:32 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:19:39.490 12:16:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:39.490 12:16:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:39.490 12:16:32 -- common/autotest_common.sh@10 -- # set +x 00:19:39.490 ************************************ 00:19:39.490 START TEST lvs_grow_clean 00:19:39.490 ************************************ 00:19:39.490 12:16:32 -- common/autotest_common.sh@1111 -- # lvs_grow 00:19:39.490 12:16:32 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:19:39.491 12:16:32 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:19:39.491 12:16:32 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:19:39.491 12:16:32 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:19:39.491 12:16:32 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:19:39.491 12:16:32 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:19:39.491 12:16:32 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:39.491 12:16:32 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:39.491 12:16:32 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:39.749 12:16:33 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:19:39.749 12:16:33 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:19:40.006 12:16:33 -- target/nvmf_lvs_grow.sh@28 -- # lvs=87a2de5a-92e4-405b-a653-a7cb98070bec 00:19:40.006 12:16:33 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87a2de5a-92e4-405b-a653-a7cb98070bec 00:19:40.006 12:16:33 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:19:40.263 12:16:33 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:19:40.263 12:16:33 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:19:40.263 12:16:33 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 87a2de5a-92e4-405b-a653-a7cb98070bec lvol 150 00:19:40.526 12:16:33 -- target/nvmf_lvs_grow.sh@33 -- # lvol=bb006339-c133-46f2-acd5-0bb976ac5948 00:19:40.526 12:16:33 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:40.526 12:16:33 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:19:40.802 [2024-04-26 12:16:34.163018] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:19:40.802 [2024-04-26 12:16:34.163128] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:19:40.802 true 00:19:40.802 12:16:34 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87a2de5a-92e4-405b-a653-a7cb98070bec 00:19:40.802 12:16:34 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:19:41.079 12:16:34 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:19:41.079 12:16:34 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:41.338 12:16:34 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bb006339-c133-46f2-acd5-0bb976ac5948 00:19:41.596 12:16:34 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:41.853 [2024-04-26 12:16:35.183619] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.853 12:16:35 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:42.111 12:16:35 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:19:42.111 12:16:35 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66023 00:19:42.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.111 12:16:35 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:42.111 12:16:35 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66023 /var/tmp/bdevperf.sock 00:19:42.111 12:16:35 -- common/autotest_common.sh@817 -- # '[' -z 66023 ']' 00:19:42.111 12:16:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.111 12:16:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:42.111 12:16:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.111 12:16:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:42.111 12:16:35 -- common/autotest_common.sh@10 -- # set +x 00:19:42.111 [2024-04-26 12:16:35.466364] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
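The lvs_grow_clean steps above, together with the grow call further below, exercise growing an lvstore that lives on a file-backed AIO bdev: the backing file goes from 200M to 400M, bdev_aio_rescan picks up the new block count (51200 -> 102400), and bdev_lvol_grow_lvstore then lifts total_data_clusters from 49 to 99. Condensed into a standalone sketch, with paths and names as used by this test:

#!/usr/bin/env bash
# Sketch of the aio-backed lvstore grow exercised by lvs_grow_clean.
set -e
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
truncate -s 200M "$aio_file"
$rpc_py bdev_aio_create "$aio_file" aio_bdev 4096
lvs=$($rpc_py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc_py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 150)
truncate -s 400M "$aio_file"          # grow the backing file
$rpc_py bdev_aio_rescan aio_bdev      # AIO bdev sees the new block count
$rpc_py bdev_lvol_grow_lvstore -u "$lvs"
$rpc_py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99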
00:19:42.111 [2024-04-26 12:16:35.466696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66023 ] 00:19:42.368 [2024-04-26 12:16:35.598143] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.368 [2024-04-26 12:16:35.733053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.299 12:16:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:43.299 12:16:36 -- common/autotest_common.sh@850 -- # return 0 00:19:43.299 12:16:36 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:19:43.300 Nvme0n1 00:19:43.300 12:16:36 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:19:43.557 [ 00:19:43.557 { 00:19:43.557 "name": "Nvme0n1", 00:19:43.557 "aliases": [ 00:19:43.557 "bb006339-c133-46f2-acd5-0bb976ac5948" 00:19:43.557 ], 00:19:43.557 "product_name": "NVMe disk", 00:19:43.557 "block_size": 4096, 00:19:43.557 "num_blocks": 38912, 00:19:43.557 "uuid": "bb006339-c133-46f2-acd5-0bb976ac5948", 00:19:43.557 "assigned_rate_limits": { 00:19:43.557 "rw_ios_per_sec": 0, 00:19:43.557 "rw_mbytes_per_sec": 0, 00:19:43.557 "r_mbytes_per_sec": 0, 00:19:43.557 "w_mbytes_per_sec": 0 00:19:43.557 }, 00:19:43.557 "claimed": false, 00:19:43.557 "zoned": false, 00:19:43.557 "supported_io_types": { 00:19:43.557 "read": true, 00:19:43.557 "write": true, 00:19:43.557 "unmap": true, 00:19:43.557 "write_zeroes": true, 00:19:43.557 "flush": true, 00:19:43.557 "reset": true, 00:19:43.557 "compare": true, 00:19:43.557 "compare_and_write": true, 00:19:43.557 "abort": true, 00:19:43.557 "nvme_admin": true, 00:19:43.557 "nvme_io": true 00:19:43.557 }, 00:19:43.557 "memory_domains": [ 00:19:43.557 { 00:19:43.557 "dma_device_id": "system", 00:19:43.557 "dma_device_type": 1 00:19:43.557 } 00:19:43.557 ], 00:19:43.557 "driver_specific": { 00:19:43.557 "nvme": [ 00:19:43.557 { 00:19:43.557 "trid": { 00:19:43.557 "trtype": "TCP", 00:19:43.557 "adrfam": "IPv4", 00:19:43.557 "traddr": "10.0.0.2", 00:19:43.557 "trsvcid": "4420", 00:19:43.557 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:43.557 }, 00:19:43.557 "ctrlr_data": { 00:19:43.557 "cntlid": 1, 00:19:43.557 "vendor_id": "0x8086", 00:19:43.557 "model_number": "SPDK bdev Controller", 00:19:43.557 "serial_number": "SPDK0", 00:19:43.557 "firmware_revision": "24.05", 00:19:43.557 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:43.557 "oacs": { 00:19:43.557 "security": 0, 00:19:43.557 "format": 0, 00:19:43.557 "firmware": 0, 00:19:43.557 "ns_manage": 0 00:19:43.557 }, 00:19:43.557 "multi_ctrlr": true, 00:19:43.557 "ana_reporting": false 00:19:43.557 }, 00:19:43.557 "vs": { 00:19:43.557 "nvme_version": "1.3" 00:19:43.557 }, 00:19:43.557 "ns_data": { 00:19:43.557 "id": 1, 00:19:43.557 "can_share": true 00:19:43.557 } 00:19:43.557 } 00:19:43.557 ], 00:19:43.557 "mp_policy": "active_passive" 00:19:43.557 } 00:19:43.557 } 00:19:43.557 ] 00:19:43.557 12:16:36 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66047 00:19:43.557 12:16:36 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:43.557 12:16:36 -- target/nvmf_lvs_grow.sh@57 
-- # sleep 2 00:19:43.814 Running I/O for 10 seconds... 00:19:44.830 Latency(us) 00:19:44.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:44.830 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:19:44.830 =================================================================================================================== 00:19:44.830 Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:19:44.830 00:19:45.766 12:16:38 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 87a2de5a-92e4-405b-a653-a7cb98070bec 00:19:45.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:45.766 Nvme0n1 : 2.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:19:45.766 =================================================================================================================== 00:19:45.766 Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:19:45.766 00:19:46.025 true 00:19:46.025 12:16:39 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87a2de5a-92e4-405b-a653-a7cb98070bec 00:19:46.025 12:16:39 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:19:46.285 12:16:39 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:19:46.285 12:16:39 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:19:46.285 12:16:39 -- target/nvmf_lvs_grow.sh@65 -- # wait 66047 00:19:46.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:46.851 Nvme0n1 : 3.00 6900.33 26.95 0.00 0.00 0.00 0.00 0.00 00:19:46.851 =================================================================================================================== 00:19:46.851 Total : 6900.33 26.95 0.00 0.00 0.00 0.00 0.00 00:19:46.851 00:19:47.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:47.786 Nvme0n1 : 4.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:19:47.786 =================================================================================================================== 00:19:47.786 Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:19:47.786 00:19:48.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:48.720 Nvme0n1 : 5.00 7086.60 27.68 0.00 0.00 0.00 0.00 0.00 00:19:48.720 =================================================================================================================== 00:19:48.720 Total : 7086.60 27.68 0.00 0.00 0.00 0.00 0.00 00:19:48.720 00:19:49.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:49.654 Nvme0n1 : 6.00 7133.17 27.86 0.00 0.00 0.00 0.00 0.00 00:19:49.654 =================================================================================================================== 00:19:49.654 Total : 7133.17 27.86 0.00 0.00 0.00 0.00 0.00 00:19:49.654 00:19:51.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:51.029 Nvme0n1 : 7.00 7148.29 27.92 0.00 0.00 0.00 0.00 0.00 00:19:51.029 =================================================================================================================== 00:19:51.029 Total : 7148.29 27.92 0.00 0.00 0.00 0.00 0.00 00:19:51.029 00:19:51.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:51.965 Nvme0n1 : 8.00 7159.62 27.97 0.00 0.00 0.00 0.00 0.00 00:19:51.965 
=================================================================================================================== 00:19:51.965 Total : 7159.62 27.97 0.00 0.00 0.00 0.00 0.00 00:19:51.965 00:19:52.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:52.899 Nvme0n1 : 9.00 7154.33 27.95 0.00 0.00 0.00 0.00 0.00 00:19:52.899 =================================================================================================================== 00:19:52.899 Total : 7154.33 27.95 0.00 0.00 0.00 0.00 0.00 00:19:52.899 00:19:53.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:53.832 Nvme0n1 : 10.00 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:19:53.832 =================================================================================================================== 00:19:53.832 Total : 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:19:53.832 00:19:53.832 00:19:53.832 Latency(us) 00:19:53.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:53.832 Nvme0n1 : 10.02 7175.38 28.03 0.00 0.00 17832.16 14060.45 46232.67 00:19:53.833 =================================================================================================================== 00:19:53.833 Total : 7175.38 28.03 0.00 0.00 17832.16 14060.45 46232.67 00:19:53.833 0 00:19:53.833 12:16:47 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66023 00:19:53.833 12:16:47 -- common/autotest_common.sh@936 -- # '[' -z 66023 ']' 00:19:53.833 12:16:47 -- common/autotest_common.sh@940 -- # kill -0 66023 00:19:53.833 12:16:47 -- common/autotest_common.sh@941 -- # uname 00:19:53.833 12:16:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:53.833 12:16:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66023 00:19:53.833 12:16:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:53.833 12:16:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:53.833 killing process with pid 66023 00:19:53.833 12:16:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66023' 00:19:53.833 12:16:47 -- common/autotest_common.sh@955 -- # kill 66023 00:19:53.833 Received shutdown signal, test time was about 10.000000 seconds 00:19:53.833 00:19:53.833 Latency(us) 00:19:53.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.833 =================================================================================================================== 00:19:53.833 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:53.833 12:16:47 -- common/autotest_common.sh@960 -- # wait 66023 00:19:54.091 12:16:47 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:54.349 12:16:47 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87a2de5a-92e4-405b-a653-a7cb98070bec 00:19:54.349 12:16:47 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:19:54.607 12:16:47 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:19:54.607 12:16:47 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:19:54.607 12:16:47 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:54.866 [2024-04-26 12:16:48.112697] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:54.866 
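The trace above exercises the online-grow path end to end: bdevperf keeps 128 queued random writes in flight against the exported lvol while the backing file is enlarged, the AIO bdev is rescanned, and the lvstore is grown, after which total_data_clusters moves from 49 to 99 without disturbing I/O. Condensed into a standalone sketch using only RPCs that appear in the trace (paths are the ones from this run; the lvstore UUID is captured from the create call rather than hard-coded):

#!/usr/bin/env bash
# Minimal sketch of the lvstore grow sequence from the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

truncate -s 200M "$AIO_FILE"
"$rpc" bdev_aio_create "$AIO_FILE" aio_bdev 4096
LVS=$("$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)
"$rpc" bdev_lvol_create -u "$LVS" lvol 150                     # 150 MiB lvol

# Grow: enlarge the file, let the AIO bdev notice, then grow the lvstore on top of it.
truncate -s 400M "$AIO_FILE"
"$rpc" bdev_aio_rescan aio_bdev
"$rpc" bdev_lvol_grow_lvstore -u "$LVS"
"$rpc" bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # expect 99

In the test itself the grow happens while bdevperf is mid-run, which is the point of the exercise; the sketch only shows the control-plane sequence.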
12:16:48 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87a2de5a-92e4-405b-a653-a7cb98070bec 00:19:54.866 12:16:48 -- common/autotest_common.sh@638 -- # local es=0 00:19:54.866 12:16:48 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87a2de5a-92e4-405b-a653-a7cb98070bec 00:19:54.866 12:16:48 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:54.866 12:16:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:54.867 12:16:48 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:54.867 12:16:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:54.867 12:16:48 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:54.867 12:16:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:54.867 12:16:48 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:54.867 12:16:48 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:54.867 12:16:48 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87a2de5a-92e4-405b-a653-a7cb98070bec 00:19:55.126 request: 00:19:55.126 { 00:19:55.126 "uuid": "87a2de5a-92e4-405b-a653-a7cb98070bec", 00:19:55.126 "method": "bdev_lvol_get_lvstores", 00:19:55.126 "req_id": 1 00:19:55.126 } 00:19:55.126 Got JSON-RPC error response 00:19:55.126 response: 00:19:55.126 { 00:19:55.126 "code": -19, 00:19:55.126 "message": "No such device" 00:19:55.126 } 00:19:55.126 12:16:48 -- common/autotest_common.sh@641 -- # es=1 00:19:55.126 12:16:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:55.126 12:16:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:55.126 12:16:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:55.126 12:16:48 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:55.385 aio_bdev 00:19:55.385 12:16:48 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev bb006339-c133-46f2-acd5-0bb976ac5948 00:19:55.385 12:16:48 -- common/autotest_common.sh@885 -- # local bdev_name=bb006339-c133-46f2-acd5-0bb976ac5948 00:19:55.385 12:16:48 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:55.385 12:16:48 -- common/autotest_common.sh@887 -- # local i 00:19:55.385 12:16:48 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:55.385 12:16:48 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:55.385 12:16:48 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:55.644 12:16:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bb006339-c133-46f2-acd5-0bb976ac5948 -t 2000 00:19:55.902 [ 00:19:55.902 { 00:19:55.902 "name": "bb006339-c133-46f2-acd5-0bb976ac5948", 00:19:55.902 "aliases": [ 00:19:55.902 "lvs/lvol" 00:19:55.902 ], 00:19:55.902 "product_name": "Logical Volume", 00:19:55.902 "block_size": 4096, 00:19:55.902 "num_blocks": 38912, 00:19:55.902 "uuid": "bb006339-c133-46f2-acd5-0bb976ac5948", 00:19:55.902 "assigned_rate_limits": { 00:19:55.902 "rw_ios_per_sec": 0, 00:19:55.902 "rw_mbytes_per_sec": 0, 00:19:55.902 "r_mbytes_per_sec": 0, 00:19:55.902 
"w_mbytes_per_sec": 0 00:19:55.902 }, 00:19:55.902 "claimed": false, 00:19:55.902 "zoned": false, 00:19:55.902 "supported_io_types": { 00:19:55.902 "read": true, 00:19:55.902 "write": true, 00:19:55.902 "unmap": true, 00:19:55.902 "write_zeroes": true, 00:19:55.902 "flush": false, 00:19:55.902 "reset": true, 00:19:55.902 "compare": false, 00:19:55.902 "compare_and_write": false, 00:19:55.902 "abort": false, 00:19:55.902 "nvme_admin": false, 00:19:55.902 "nvme_io": false 00:19:55.902 }, 00:19:55.902 "driver_specific": { 00:19:55.902 "lvol": { 00:19:55.902 "lvol_store_uuid": "87a2de5a-92e4-405b-a653-a7cb98070bec", 00:19:55.902 "base_bdev": "aio_bdev", 00:19:55.902 "thin_provision": false, 00:19:55.902 "snapshot": false, 00:19:55.902 "clone": false, 00:19:55.902 "esnap_clone": false 00:19:55.902 } 00:19:55.902 } 00:19:55.902 } 00:19:55.902 ] 00:19:55.902 12:16:49 -- common/autotest_common.sh@893 -- # return 0 00:19:55.903 12:16:49 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:19:55.903 12:16:49 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87a2de5a-92e4-405b-a653-a7cb98070bec 00:19:56.162 12:16:49 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:19:56.162 12:16:49 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87a2de5a-92e4-405b-a653-a7cb98070bec 00:19:56.162 12:16:49 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:19:56.421 12:16:49 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:19:56.421 12:16:49 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete bb006339-c133-46f2-acd5-0bb976ac5948 00:19:56.679 12:16:50 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 87a2de5a-92e4-405b-a653-a7cb98070bec 00:19:56.937 12:16:50 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:57.195 12:16:50 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:57.452 00:19:57.452 ************************************ 00:19:57.452 END TEST lvs_grow_clean 00:19:57.452 ************************************ 00:19:57.452 real 0m18.006s 00:19:57.452 user 0m16.944s 00:19:57.452 sys 0m2.520s 00:19:57.452 12:16:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:57.452 12:16:50 -- common/autotest_common.sh@10 -- # set +x 00:19:57.452 12:16:50 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:19:57.452 12:16:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:57.452 12:16:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:57.452 12:16:50 -- common/autotest_common.sh@10 -- # set +x 00:19:57.710 ************************************ 00:19:57.710 START TEST lvs_grow_dirty 00:19:57.711 ************************************ 00:19:57.711 12:16:50 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:19:57.711 12:16:50 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:19:57.711 12:16:50 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:19:57.711 12:16:50 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:19:57.711 12:16:50 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:19:57.711 12:16:50 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:19:57.711 12:16:50 -- target/nvmf_lvs_grow.sh@20 -- # local 
lvol_bdev_size_mb=150 00:19:57.711 12:16:50 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:57.711 12:16:50 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:57.711 12:16:50 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:57.969 12:16:51 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:19:57.969 12:16:51 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:19:58.227 12:16:51 -- target/nvmf_lvs_grow.sh@28 -- # lvs=405df341-861b-4ddf-a160-87ad516c8b4e 00:19:58.227 12:16:51 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 405df341-861b-4ddf-a160-87ad516c8b4e 00:19:58.227 12:16:51 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:19:58.485 12:16:51 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:19:58.485 12:16:51 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:19:58.485 12:16:51 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 405df341-861b-4ddf-a160-87ad516c8b4e lvol 150 00:19:58.744 12:16:52 -- target/nvmf_lvs_grow.sh@33 -- # lvol=d54fdfc2-3447-4167-a30e-c295b1b843ed 00:19:58.744 12:16:52 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:58.744 12:16:52 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:19:59.002 [2024-04-26 12:16:52.305513] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:19:59.002 [2024-04-26 12:16:52.305616] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:19:59.002 true 00:19:59.002 12:16:52 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 405df341-861b-4ddf-a160-87ad516c8b4e 00:19:59.002 12:16:52 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:19:59.261 12:16:52 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:19:59.261 12:16:52 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:59.520 12:16:52 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d54fdfc2-3447-4167-a30e-c295b1b843ed 00:19:59.779 12:16:53 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:00.036 12:16:53 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:00.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
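Before the dirty-variant run starts, the lvol is exported over NVMe/TCP and bdevperf is attached to it, exactly as in the clean variant. A condensed sketch of that wiring, assuming the TCP transport was already created (as at the top of this trace) and reusing the NQN, address, socket path, and lvol UUID shown in the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
perf_rpc=/var/tmp/bdevperf.sock
LVOL=d54fdfc2-3447-4167-a30e-c295b1b843ed   # UUID returned by bdev_lvol_create in the trace

# Export the lvol as namespace 1 of cnode0 on the TCP listener.
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Run bdevperf as the initiator; -z makes it wait for an explicit perform_tests RPC.
"$bdevperf" -r "$perf_rpc" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
# (the test waits for bdevperf's RPC socket before issuing the attach)
"$rpc" -s "$perf_rpc" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$perf_rpc" perform_tests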
00:20:00.293 12:16:53 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66295 00:20:00.294 12:16:53 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:20:00.294 12:16:53 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:00.294 12:16:53 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66295 /var/tmp/bdevperf.sock 00:20:00.294 12:16:53 -- common/autotest_common.sh@817 -- # '[' -z 66295 ']' 00:20:00.294 12:16:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.294 12:16:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:00.294 12:16:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.294 12:16:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:00.294 12:16:53 -- common/autotest_common.sh@10 -- # set +x 00:20:00.294 [2024-04-26 12:16:53.616808] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:20:00.294 [2024-04-26 12:16:53.617130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66295 ] 00:20:00.294 [2024-04-26 12:16:53.750092] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.551 [2024-04-26 12:16:53.875501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.500 12:16:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:01.500 12:16:54 -- common/autotest_common.sh@850 -- # return 0 00:20:01.500 12:16:54 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:20:01.500 Nvme0n1 00:20:01.500 12:16:54 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:20:01.757 [ 00:20:01.757 { 00:20:01.757 "name": "Nvme0n1", 00:20:01.757 "aliases": [ 00:20:01.757 "d54fdfc2-3447-4167-a30e-c295b1b843ed" 00:20:01.757 ], 00:20:01.757 "product_name": "NVMe disk", 00:20:01.757 "block_size": 4096, 00:20:01.757 "num_blocks": 38912, 00:20:01.757 "uuid": "d54fdfc2-3447-4167-a30e-c295b1b843ed", 00:20:01.757 "assigned_rate_limits": { 00:20:01.757 "rw_ios_per_sec": 0, 00:20:01.757 "rw_mbytes_per_sec": 0, 00:20:01.757 "r_mbytes_per_sec": 0, 00:20:01.757 "w_mbytes_per_sec": 0 00:20:01.757 }, 00:20:01.757 "claimed": false, 00:20:01.757 "zoned": false, 00:20:01.757 "supported_io_types": { 00:20:01.757 "read": true, 00:20:01.757 "write": true, 00:20:01.757 "unmap": true, 00:20:01.757 "write_zeroes": true, 00:20:01.757 "flush": true, 00:20:01.757 "reset": true, 00:20:01.757 "compare": true, 00:20:01.757 "compare_and_write": true, 00:20:01.757 "abort": true, 00:20:01.757 "nvme_admin": true, 00:20:01.757 "nvme_io": true 00:20:01.757 }, 00:20:01.757 "memory_domains": [ 00:20:01.757 { 00:20:01.757 "dma_device_id": "system", 00:20:01.757 "dma_device_type": 1 00:20:01.757 } 00:20:01.757 ], 00:20:01.757 "driver_specific": { 00:20:01.757 "nvme": [ 00:20:01.757 { 00:20:01.757 "trid": { 00:20:01.757 "trtype": "TCP", 00:20:01.757 "adrfam": "IPv4", 00:20:01.757 "traddr": "10.0.0.2", 00:20:01.757 
"trsvcid": "4420", 00:20:01.757 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:01.757 }, 00:20:01.757 "ctrlr_data": { 00:20:01.757 "cntlid": 1, 00:20:01.757 "vendor_id": "0x8086", 00:20:01.757 "model_number": "SPDK bdev Controller", 00:20:01.757 "serial_number": "SPDK0", 00:20:01.757 "firmware_revision": "24.05", 00:20:01.757 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:01.757 "oacs": { 00:20:01.757 "security": 0, 00:20:01.757 "format": 0, 00:20:01.757 "firmware": 0, 00:20:01.757 "ns_manage": 0 00:20:01.757 }, 00:20:01.757 "multi_ctrlr": true, 00:20:01.757 "ana_reporting": false 00:20:01.757 }, 00:20:01.757 "vs": { 00:20:01.757 "nvme_version": "1.3" 00:20:01.757 }, 00:20:01.757 "ns_data": { 00:20:01.757 "id": 1, 00:20:01.757 "can_share": true 00:20:01.757 } 00:20:01.757 } 00:20:01.757 ], 00:20:01.757 "mp_policy": "active_passive" 00:20:01.757 } 00:20:01.757 } 00:20:01.757 ] 00:20:01.757 12:16:55 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66320 00:20:01.757 12:16:55 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:01.757 12:16:55 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:20:02.015 Running I/O for 10 seconds... 00:20:02.950 Latency(us) 00:20:02.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:02.950 Nvme0n1 : 1.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:20:02.950 =================================================================================================================== 00:20:02.950 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:20:02.950 00:20:03.940 12:16:57 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 405df341-861b-4ddf-a160-87ad516c8b4e 00:20:03.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:03.940 Nvme0n1 : 2.00 7429.50 29.02 0.00 0.00 0.00 0.00 0.00 00:20:03.940 =================================================================================================================== 00:20:03.940 Total : 7429.50 29.02 0.00 0.00 0.00 0.00 0.00 00:20:03.940 00:20:04.199 true 00:20:04.199 12:16:57 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 405df341-861b-4ddf-a160-87ad516c8b4e 00:20:04.199 12:16:57 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:20:04.458 12:16:57 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:20:04.458 12:16:57 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:20:04.458 12:16:57 -- target/nvmf_lvs_grow.sh@65 -- # wait 66320 00:20:05.025 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:05.025 Nvme0n1 : 3.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:20:05.025 =================================================================================================================== 00:20:05.025 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:20:05.025 00:20:05.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:05.961 Nvme0n1 : 4.00 7524.75 29.39 0.00 0.00 0.00 0.00 0.00 00:20:05.961 =================================================================================================================== 00:20:05.961 Total : 7524.75 29.39 0.00 0.00 0.00 0.00 0.00 00:20:05.961 00:20:06.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:06.981 Nvme0n1 : 5.00 
7543.80 29.47 0.00 0.00 0.00 0.00 0.00 00:20:06.981 =================================================================================================================== 00:20:06.981 Total : 7543.80 29.47 0.00 0.00 0.00 0.00 0.00 00:20:06.981 00:20:07.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:07.924 Nvme0n1 : 6.00 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:20:07.924 =================================================================================================================== 00:20:07.924 Total : 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:20:07.924 00:20:08.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:08.859 Nvme0n1 : 7.00 7474.57 29.20 0.00 0.00 0.00 0.00 0.00 00:20:08.859 =================================================================================================================== 00:20:08.859 Total : 7474.57 29.20 0.00 0.00 0.00 0.00 0.00 00:20:08.859 00:20:10.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:10.232 Nvme0n1 : 8.00 7445.12 29.08 0.00 0.00 0.00 0.00 0.00 00:20:10.232 =================================================================================================================== 00:20:10.232 Total : 7445.12 29.08 0.00 0.00 0.00 0.00 0.00 00:20:10.232 00:20:11.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:11.168 Nvme0n1 : 9.00 7408.11 28.94 0.00 0.00 0.00 0.00 0.00 00:20:11.168 =================================================================================================================== 00:20:11.168 Total : 7408.11 28.94 0.00 0.00 0.00 0.00 0.00 00:20:11.168 00:20:12.098 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:12.098 Nvme0n1 : 10.00 7391.40 28.87 0.00 0.00 0.00 0.00 0.00 00:20:12.098 =================================================================================================================== 00:20:12.098 Total : 7391.40 28.87 0.00 0.00 0.00 0.00 0.00 00:20:12.098 00:20:12.098 00:20:12.098 Latency(us) 00:20:12.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.098 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:12.098 Nvme0n1 : 10.02 7391.18 28.87 0.00 0.00 17312.69 11558.17 89605.59 00:20:12.098 =================================================================================================================== 00:20:12.098 Total : 7391.18 28.87 0.00 0.00 17312.69 11558.17 89605.59 00:20:12.098 0 00:20:12.098 12:17:05 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66295 00:20:12.098 12:17:05 -- common/autotest_common.sh@936 -- # '[' -z 66295 ']' 00:20:12.098 12:17:05 -- common/autotest_common.sh@940 -- # kill -0 66295 00:20:12.098 12:17:05 -- common/autotest_common.sh@941 -- # uname 00:20:12.098 12:17:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:12.098 12:17:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66295 00:20:12.098 killing process with pid 66295 00:20:12.098 Received shutdown signal, test time was about 10.000000 seconds 00:20:12.098 00:20:12.098 Latency(us) 00:20:12.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.098 =================================================================================================================== 00:20:12.098 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.098 12:17:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:12.098 12:17:05 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:12.098 12:17:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66295' 00:20:12.098 12:17:05 -- common/autotest_common.sh@955 -- # kill 66295 00:20:12.098 12:17:05 -- common/autotest_common.sh@960 -- # wait 66295 00:20:12.356 12:17:05 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:12.613 12:17:05 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 405df341-861b-4ddf-a160-87ad516c8b4e 00:20:12.613 12:17:05 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:20:12.871 12:17:06 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:20:12.871 12:17:06 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:20:12.871 12:17:06 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 65931 00:20:12.871 12:17:06 -- target/nvmf_lvs_grow.sh@74 -- # wait 65931 00:20:12.871 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 65931 Killed "${NVMF_APP[@]}" "$@" 00:20:12.871 12:17:06 -- target/nvmf_lvs_grow.sh@74 -- # true 00:20:12.871 12:17:06 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:20:12.871 12:17:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:12.871 12:17:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:12.871 12:17:06 -- common/autotest_common.sh@10 -- # set +x 00:20:12.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.871 12:17:06 -- nvmf/common.sh@470 -- # nvmfpid=66452 00:20:12.871 12:17:06 -- nvmf/common.sh@471 -- # waitforlisten 66452 00:20:12.871 12:17:06 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:12.871 12:17:06 -- common/autotest_common.sh@817 -- # '[' -z 66452 ']' 00:20:12.871 12:17:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.871 12:17:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:12.871 12:17:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.871 12:17:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:12.871 12:17:06 -- common/autotest_common.sh@10 -- # set +x 00:20:12.871 [2024-04-26 12:17:06.250056] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:20:12.871 [2024-04-26 12:17:06.250137] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.129 [2024-04-26 12:17:06.385330] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.129 [2024-04-26 12:17:06.489541] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.129 [2024-04-26 12:17:06.489600] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.129 [2024-04-26 12:17:06.489612] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.129 [2024-04-26 12:17:06.489620] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:13.129 [2024-04-26 12:17:06.489627] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.129 [2024-04-26 12:17:06.489664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.061 12:17:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:14.061 12:17:07 -- common/autotest_common.sh@850 -- # return 0 00:20:14.061 12:17:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:14.061 12:17:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:14.061 12:17:07 -- common/autotest_common.sh@10 -- # set +x 00:20:14.061 12:17:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.061 12:17:07 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:14.061 [2024-04-26 12:17:07.522599] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:20:14.061 [2024-04-26 12:17:07.523080] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:20:14.061 [2024-04-26 12:17:07.523241] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:20:14.321 12:17:07 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:20:14.321 12:17:07 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev d54fdfc2-3447-4167-a30e-c295b1b843ed 00:20:14.321 12:17:07 -- common/autotest_common.sh@885 -- # local bdev_name=d54fdfc2-3447-4167-a30e-c295b1b843ed 00:20:14.321 12:17:07 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:14.321 12:17:07 -- common/autotest_common.sh@887 -- # local i 00:20:14.321 12:17:07 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:14.321 12:17:07 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:14.321 12:17:07 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:14.587 12:17:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d54fdfc2-3447-4167-a30e-c295b1b843ed -t 2000 00:20:14.587 [ 00:20:14.587 { 00:20:14.587 "name": "d54fdfc2-3447-4167-a30e-c295b1b843ed", 00:20:14.587 "aliases": [ 00:20:14.587 "lvs/lvol" 00:20:14.587 ], 00:20:14.587 "product_name": "Logical Volume", 00:20:14.587 "block_size": 4096, 00:20:14.587 "num_blocks": 38912, 00:20:14.587 "uuid": "d54fdfc2-3447-4167-a30e-c295b1b843ed", 00:20:14.587 "assigned_rate_limits": { 00:20:14.587 "rw_ios_per_sec": 0, 00:20:14.587 "rw_mbytes_per_sec": 0, 00:20:14.587 "r_mbytes_per_sec": 0, 00:20:14.587 "w_mbytes_per_sec": 0 00:20:14.587 }, 00:20:14.587 "claimed": false, 00:20:14.587 "zoned": false, 00:20:14.587 "supported_io_types": { 00:20:14.587 "read": true, 00:20:14.587 "write": true, 00:20:14.587 "unmap": true, 00:20:14.587 "write_zeroes": true, 00:20:14.587 "flush": false, 00:20:14.587 "reset": true, 00:20:14.587 "compare": false, 00:20:14.587 "compare_and_write": false, 00:20:14.587 "abort": false, 00:20:14.587 "nvme_admin": false, 00:20:14.587 "nvme_io": false 00:20:14.587 }, 00:20:14.587 "driver_specific": { 00:20:14.587 "lvol": { 00:20:14.587 "lvol_store_uuid": "405df341-861b-4ddf-a160-87ad516c8b4e", 00:20:14.587 "base_bdev": "aio_bdev", 00:20:14.587 "thin_provision": false, 00:20:14.587 "snapshot": false, 00:20:14.587 "clone": false, 00:20:14.588 "esnap_clone": false 00:20:14.588 } 00:20:14.588 } 00:20:14.588 } 00:20:14.588 ] 00:20:14.588 12:17:08 -- common/autotest_common.sh@893 -- # return 0 
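The cluster counts asserted throughout the test follow directly from the 4 MiB cluster size, and the figures in the trace are consistent with one cluster being held back for lvstore metadata. A quick back-of-the-envelope check (plain bash arithmetic, not part of the test itself):

echo $(( 200 / 4 - 1 ))                   # 49  -> total_data_clusters before the grow
echo $(( 400 / 4 - 1 ))                   # 99  -> total_data_clusters after the grow
echo $(( (150 + 4 - 1) / 4 ))             # 38  -> 150 MiB lvol rounded up to whole clusters
echo $(( 38 * 4 * 1024 * 1024 / 4096 ))   # 38912 -> num_blocks reported by bdev_get_bdevs
echo $(( 99 - 38 ))                       # 61  -> free_clusters the test checks for

The dirty variant kills the target with -9 while this state is live and then recreates the AIO bdev, relying on the blobstore recovery pass (the "Performing recovery on blobstore" notice above) to arrive at the same 61/99 numbers.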
00:20:14.588 12:17:08 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 405df341-861b-4ddf-a160-87ad516c8b4e 00:20:14.588 12:17:08 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:20:15.153 12:17:08 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:20:15.153 12:17:08 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 405df341-861b-4ddf-a160-87ad516c8b4e 00:20:15.153 12:17:08 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:20:15.153 12:17:08 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:20:15.153 12:17:08 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:15.411 [2024-04-26 12:17:08.828075] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:20:15.411 12:17:08 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 405df341-861b-4ddf-a160-87ad516c8b4e 00:20:15.411 12:17:08 -- common/autotest_common.sh@638 -- # local es=0 00:20:15.411 12:17:08 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 405df341-861b-4ddf-a160-87ad516c8b4e 00:20:15.411 12:17:08 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:15.411 12:17:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:15.411 12:17:08 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:15.411 12:17:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:15.411 12:17:08 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:15.411 12:17:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:15.411 12:17:08 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:15.411 12:17:08 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:15.411 12:17:08 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 405df341-861b-4ddf-a160-87ad516c8b4e 00:20:15.669 request: 00:20:15.669 { 00:20:15.669 "uuid": "405df341-861b-4ddf-a160-87ad516c8b4e", 00:20:15.669 "method": "bdev_lvol_get_lvstores", 00:20:15.669 "req_id": 1 00:20:15.669 } 00:20:15.669 Got JSON-RPC error response 00:20:15.669 response: 00:20:15.669 { 00:20:15.669 "code": -19, 00:20:15.669 "message": "No such device" 00:20:15.669 } 00:20:15.928 12:17:09 -- common/autotest_common.sh@641 -- # es=1 00:20:15.928 12:17:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:15.928 12:17:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:15.928 12:17:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:15.928 12:17:09 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:16.185 aio_bdev 00:20:16.185 12:17:09 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev d54fdfc2-3447-4167-a30e-c295b1b843ed 00:20:16.185 12:17:09 -- common/autotest_common.sh@885 -- # local bdev_name=d54fdfc2-3447-4167-a30e-c295b1b843ed 00:20:16.185 12:17:09 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:16.185 12:17:09 -- common/autotest_common.sh@887 -- # 
local i 00:20:16.185 12:17:09 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:16.185 12:17:09 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:16.185 12:17:09 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:16.185 12:17:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d54fdfc2-3447-4167-a30e-c295b1b843ed -t 2000 00:20:16.443 [ 00:20:16.443 { 00:20:16.443 "name": "d54fdfc2-3447-4167-a30e-c295b1b843ed", 00:20:16.443 "aliases": [ 00:20:16.443 "lvs/lvol" 00:20:16.443 ], 00:20:16.443 "product_name": "Logical Volume", 00:20:16.443 "block_size": 4096, 00:20:16.443 "num_blocks": 38912, 00:20:16.443 "uuid": "d54fdfc2-3447-4167-a30e-c295b1b843ed", 00:20:16.443 "assigned_rate_limits": { 00:20:16.443 "rw_ios_per_sec": 0, 00:20:16.443 "rw_mbytes_per_sec": 0, 00:20:16.443 "r_mbytes_per_sec": 0, 00:20:16.443 "w_mbytes_per_sec": 0 00:20:16.443 }, 00:20:16.443 "claimed": false, 00:20:16.443 "zoned": false, 00:20:16.443 "supported_io_types": { 00:20:16.443 "read": true, 00:20:16.443 "write": true, 00:20:16.443 "unmap": true, 00:20:16.443 "write_zeroes": true, 00:20:16.443 "flush": false, 00:20:16.443 "reset": true, 00:20:16.443 "compare": false, 00:20:16.443 "compare_and_write": false, 00:20:16.443 "abort": false, 00:20:16.443 "nvme_admin": false, 00:20:16.443 "nvme_io": false 00:20:16.443 }, 00:20:16.443 "driver_specific": { 00:20:16.443 "lvol": { 00:20:16.443 "lvol_store_uuid": "405df341-861b-4ddf-a160-87ad516c8b4e", 00:20:16.443 "base_bdev": "aio_bdev", 00:20:16.443 "thin_provision": false, 00:20:16.443 "snapshot": false, 00:20:16.443 "clone": false, 00:20:16.443 "esnap_clone": false 00:20:16.443 } 00:20:16.443 } 00:20:16.443 } 00:20:16.443 ] 00:20:16.443 12:17:09 -- common/autotest_common.sh@893 -- # return 0 00:20:16.443 12:17:09 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 405df341-861b-4ddf-a160-87ad516c8b4e 00:20:16.443 12:17:09 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:20:16.701 12:17:10 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:20:16.701 12:17:10 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 405df341-861b-4ddf-a160-87ad516c8b4e 00:20:16.701 12:17:10 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:20:16.960 12:17:10 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:20:16.960 12:17:10 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d54fdfc2-3447-4167-a30e-c295b1b843ed 00:20:17.218 12:17:10 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 405df341-861b-4ddf-a160-87ad516c8b4e 00:20:17.478 12:17:10 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:17.736 12:17:11 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:17.994 00:20:17.994 real 0m20.490s 00:20:17.994 user 0m43.149s 00:20:17.994 sys 0m8.053s 00:20:17.994 ************************************ 00:20:17.994 END TEST lvs_grow_dirty 00:20:17.994 ************************************ 00:20:17.994 12:17:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:17.994 12:17:11 -- common/autotest_common.sh@10 -- # set +x 00:20:18.272 12:17:11 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:20:18.272 12:17:11 -- common/autotest_common.sh@794 -- # type=--id 00:20:18.272 12:17:11 -- common/autotest_common.sh@795 -- # id=0 00:20:18.272 12:17:11 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:20:18.272 12:17:11 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:18.272 12:17:11 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:20:18.272 12:17:11 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:20:18.272 12:17:11 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:20:18.272 12:17:11 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:18.272 nvmf_trace.0 00:20:18.272 12:17:11 -- common/autotest_common.sh@809 -- # return 0 00:20:18.272 12:17:11 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:20:18.272 12:17:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:18.272 12:17:11 -- nvmf/common.sh@117 -- # sync 00:20:18.272 12:17:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:18.272 12:17:11 -- nvmf/common.sh@120 -- # set +e 00:20:18.272 12:17:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:18.272 12:17:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:18.272 rmmod nvme_tcp 00:20:18.272 rmmod nvme_fabrics 00:20:18.272 rmmod nvme_keyring 00:20:18.272 12:17:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:18.536 12:17:11 -- nvmf/common.sh@124 -- # set -e 00:20:18.536 12:17:11 -- nvmf/common.sh@125 -- # return 0 00:20:18.536 12:17:11 -- nvmf/common.sh@478 -- # '[' -n 66452 ']' 00:20:18.536 12:17:11 -- nvmf/common.sh@479 -- # killprocess 66452 00:20:18.536 12:17:11 -- common/autotest_common.sh@936 -- # '[' -z 66452 ']' 00:20:18.536 12:17:11 -- common/autotest_common.sh@940 -- # kill -0 66452 00:20:18.536 12:17:11 -- common/autotest_common.sh@941 -- # uname 00:20:18.536 12:17:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:18.536 12:17:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66452 00:20:18.536 12:17:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:18.536 killing process with pid 66452 00:20:18.536 12:17:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:18.536 12:17:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66452' 00:20:18.536 12:17:11 -- common/autotest_common.sh@955 -- # kill 66452 00:20:18.536 12:17:11 -- common/autotest_common.sh@960 -- # wait 66452 00:20:18.536 12:17:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:18.536 12:17:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:18.536 12:17:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:18.536 12:17:11 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:18.536 12:17:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:18.536 12:17:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.536 12:17:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.536 12:17:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.794 12:17:12 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:18.794 ************************************ 00:20:18.794 END TEST nvmf_lvs_grow 00:20:18.794 ************************************ 00:20:18.794 00:20:18.794 real 0m41.176s 00:20:18.794 user 1m6.553s 00:20:18.794 sys 0m11.343s 00:20:18.794 12:17:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:18.794 12:17:12 -- 
common/autotest_common.sh@10 -- # set +x 00:20:18.794 12:17:12 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:20:18.794 12:17:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:18.794 12:17:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:18.794 12:17:12 -- common/autotest_common.sh@10 -- # set +x 00:20:18.794 ************************************ 00:20:18.794 START TEST nvmf_bdev_io_wait 00:20:18.794 ************************************ 00:20:18.794 12:17:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:20:18.794 * Looking for test storage... 00:20:18.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:18.794 12:17:12 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:18.794 12:17:12 -- nvmf/common.sh@7 -- # uname -s 00:20:18.794 12:17:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.794 12:17:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.794 12:17:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.794 12:17:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.794 12:17:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.794 12:17:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.795 12:17:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.795 12:17:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.795 12:17:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.795 12:17:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.795 12:17:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:20:18.795 12:17:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:20:18.795 12:17:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.795 12:17:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.795 12:17:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:18.795 12:17:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:18.795 12:17:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:18.795 12:17:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.795 12:17:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.795 12:17:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.795 12:17:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.795 12:17:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.795 12:17:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.795 12:17:12 -- paths/export.sh@5 -- # export PATH 00:20:18.795 12:17:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.795 12:17:12 -- nvmf/common.sh@47 -- # : 0 00:20:18.795 12:17:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:18.795 12:17:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:18.795 12:17:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:18.795 12:17:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.795 12:17:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.795 12:17:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:18.795 12:17:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:18.795 12:17:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:18.795 12:17:12 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:18.795 12:17:12 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:18.795 12:17:12 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:20:18.795 12:17:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:19.054 12:17:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.054 12:17:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:19.054 12:17:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:19.054 12:17:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:19.054 12:17:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.054 12:17:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:19.054 12:17:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.054 12:17:12 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:19.054 12:17:12 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:19.054 12:17:12 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:19.054 12:17:12 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:19.054 12:17:12 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 
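With NET_TYPE=virt, nvmf_veth_init (traced below) gives the target its own network namespace and reaches it through veth pairs plugged into a bridge, so 10.0.0.1 on the initiator side can talk to 10.0.0.2/10.0.0.3 on the target side without a real NIC. The shape of the setup, reduced to its essentials (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is built the same way and omitted here for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the default ns
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end moves into the ns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                           # initiator -> target sanity check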
00:20:19.054 12:17:12 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:19.054 12:17:12 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.054 12:17:12 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.054 12:17:12 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:19.054 12:17:12 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:19.054 12:17:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:19.054 12:17:12 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:19.054 12:17:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:19.054 12:17:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.054 12:17:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:19.054 12:17:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:19.054 12:17:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:19.054 12:17:12 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:19.054 12:17:12 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:19.054 12:17:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:19.054 Cannot find device "nvmf_tgt_br" 00:20:19.054 12:17:12 -- nvmf/common.sh@155 -- # true 00:20:19.054 12:17:12 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:19.054 Cannot find device "nvmf_tgt_br2" 00:20:19.054 12:17:12 -- nvmf/common.sh@156 -- # true 00:20:19.054 12:17:12 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:19.054 12:17:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:19.054 Cannot find device "nvmf_tgt_br" 00:20:19.054 12:17:12 -- nvmf/common.sh@158 -- # true 00:20:19.054 12:17:12 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:19.054 Cannot find device "nvmf_tgt_br2" 00:20:19.054 12:17:12 -- nvmf/common.sh@159 -- # true 00:20:19.054 12:17:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:19.054 12:17:12 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:19.054 12:17:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:19.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.054 12:17:12 -- nvmf/common.sh@162 -- # true 00:20:19.054 12:17:12 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:19.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.054 12:17:12 -- nvmf/common.sh@163 -- # true 00:20:19.054 12:17:12 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:19.054 12:17:12 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:19.054 12:17:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:19.054 12:17:12 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:19.054 12:17:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:19.054 12:17:12 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:19.054 12:17:12 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:19.054 12:17:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:19.054 12:17:12 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:19.054 
12:17:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:19.054 12:17:12 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:19.054 12:17:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:19.054 12:17:12 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:19.054 12:17:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:19.054 12:17:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:19.054 12:17:12 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:19.054 12:17:12 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:19.054 12:17:12 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:19.054 12:17:12 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:19.313 12:17:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:19.313 12:17:12 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:19.313 12:17:12 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:19.313 12:17:12 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:19.313 12:17:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:19.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:20:19.313 00:20:19.313 --- 10.0.0.2 ping statistics --- 00:20:19.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.313 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:19.313 12:17:12 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:19.313 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:19.313 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:20:19.313 00:20:19.313 --- 10.0.0.3 ping statistics --- 00:20:19.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.313 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:19.313 12:17:12 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:19.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:19.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:20:19.313 00:20:19.313 --- 10.0.0.1 ping statistics --- 00:20:19.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.313 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:20:19.313 12:17:12 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.313 12:17:12 -- nvmf/common.sh@422 -- # return 0 00:20:19.313 12:17:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:19.313 12:17:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.313 12:17:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:19.313 12:17:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:19.313 12:17:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.313 12:17:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:19.313 12:17:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:19.313 12:17:12 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:19.313 12:17:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:19.313 12:17:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:19.314 12:17:12 -- common/autotest_common.sh@10 -- # set +x 00:20:19.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
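The nvmf_veth_init block above is the whole software test network: the target runs in its own network namespace and reaches the initiator through veth pairs joined by a bridge, with connectivity confirmed by the pings. In condensed sketch form, assuming root, iproute2/iptables on the host, and the names and addresses from the trace (the second target interface nvmf_tgt_if2 / 10.0.0.3 follows the same pattern and is left out here):

#!/usr/bin/env bash
# Sketch of the topology built by nvmf_veth_init; names/addresses copied from the trace.
set -euo pipefail
ip netns add nvmf_tgt_ns_spdk                              # target lives in its own netns
ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br   # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if                   # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                            # bridge ties the host-side ends together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                         # initiator -> target reachability check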
00:20:19.314 12:17:12 -- nvmf/common.sh@470 -- # nvmfpid=66767 00:20:19.314 12:17:12 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:19.314 12:17:12 -- nvmf/common.sh@471 -- # waitforlisten 66767 00:20:19.314 12:17:12 -- common/autotest_common.sh@817 -- # '[' -z 66767 ']' 00:20:19.314 12:17:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.314 12:17:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:19.314 12:17:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.314 12:17:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:19.314 12:17:12 -- common/autotest_common.sh@10 -- # set +x 00:20:19.314 [2024-04-26 12:17:12.653927] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:20:19.314 [2024-04-26 12:17:12.654054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.572 [2024-04-26 12:17:12.791642] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:19.572 [2024-04-26 12:17:12.907899] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.572 [2024-04-26 12:17:12.908159] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.572 [2024-04-26 12:17:12.908314] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.572 [2024-04-26 12:17:12.908458] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.572 [2024-04-26 12:17:12.908504] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
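nvmfappstart then comes down to launching nvmf_tgt inside that namespace with --wait-for-rpc, waiting for its RPC socket, and provisioning the subsystem through the rpc_cmd calls that follow in the trace. A rough stand-alone equivalent using scripts/rpc.py directly (paths, flags, and NQNs are taken from the trace; the readiness loop is an assumed stand-in for the suite's waitforlisten helper):

#!/usr/bin/env bash
# Sketch: start the target in the netns and provision it over JSON-RPC.
SPDK=/home/vagrant/spdk_repo/spdk                     # repo path as used in the trace
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"

ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!

# Poll until the RPC socket answers (stand-in for waitforlisten).
until $RPC -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

$RPC bdev_set_options -p 5 -c 1        # bdev options must be set before init completes
$RPC framework_start_init              # release the --wait-for-rpc pause
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420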
00:20:19.572 [2024-04-26 12:17:12.910835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.572 [2024-04-26 12:17:12.911189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.572 [2024-04-26 12:17:12.911199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.572 [2024-04-26 12:17:12.911013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.138 12:17:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:20.138 12:17:13 -- common/autotest_common.sh@850 -- # return 0 00:20:20.138 12:17:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:20.138 12:17:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:20.138 12:17:13 -- common/autotest_common.sh@10 -- # set +x 00:20:20.396 12:17:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:20:20.396 12:17:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.396 12:17:13 -- common/autotest_common.sh@10 -- # set +x 00:20:20.396 12:17:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:20:20.396 12:17:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.396 12:17:13 -- common/autotest_common.sh@10 -- # set +x 00:20:20.396 12:17:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:20.396 12:17:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.396 12:17:13 -- common/autotest_common.sh@10 -- # set +x 00:20:20.396 [2024-04-26 12:17:13.693128] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.396 12:17:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:20.396 12:17:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.396 12:17:13 -- common/autotest_common.sh@10 -- # set +x 00:20:20.396 Malloc0 00:20:20.396 12:17:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:20.396 12:17:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.396 12:17:13 -- common/autotest_common.sh@10 -- # set +x 00:20:20.396 12:17:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:20.396 12:17:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.396 12:17:13 -- common/autotest_common.sh@10 -- # set +x 00:20:20.396 12:17:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.396 12:17:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.396 12:17:13 -- common/autotest_common.sh@10 -- # set +x 00:20:20.396 [2024-04-26 12:17:13.760099] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.396 12:17:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66802 00:20:20.396 12:17:13 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@30 -- # READ_PID=66804 00:20:20.396 12:17:13 -- nvmf/common.sh@521 -- # config=() 00:20:20.396 12:17:13 -- nvmf/common.sh@521 -- # local subsystem config 00:20:20.396 12:17:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.396 12:17:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.396 { 00:20:20.396 "params": { 00:20:20.396 "name": "Nvme$subsystem", 00:20:20.396 "trtype": "$TEST_TRANSPORT", 00:20:20.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.396 "adrfam": "ipv4", 00:20:20.396 "trsvcid": "$NVMF_PORT", 00:20:20.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.396 "hdgst": ${hdgst:-false}, 00:20:20.396 "ddgst": ${ddgst:-false} 00:20:20.396 }, 00:20:20.396 "method": "bdev_nvme_attach_controller" 00:20:20.396 } 00:20:20.396 EOF 00:20:20.396 )") 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66806 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:20:20.396 12:17:13 -- nvmf/common.sh@521 -- # config=() 00:20:20.396 12:17:13 -- nvmf/common.sh@521 -- # local subsystem config 00:20:20.396 12:17:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.396 12:17:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.396 { 00:20:20.396 "params": { 00:20:20.396 "name": "Nvme$subsystem", 00:20:20.396 "trtype": "$TEST_TRANSPORT", 00:20:20.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.396 "adrfam": "ipv4", 00:20:20.396 "trsvcid": "$NVMF_PORT", 00:20:20.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.396 "hdgst": ${hdgst:-false}, 00:20:20.396 "ddgst": ${ddgst:-false} 00:20:20.396 }, 00:20:20.396 "method": "bdev_nvme_attach_controller" 00:20:20.396 } 00:20:20.396 EOF 00:20:20.396 )") 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66809 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:20:20.396 12:17:13 -- nvmf/common.sh@543 -- # cat 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@35 -- # sync 00:20:20.396 12:17:13 -- nvmf/common.sh@543 -- # cat 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:20:20.396 12:17:13 -- nvmf/common.sh@521 -- # config=() 00:20:20.396 12:17:13 -- nvmf/common.sh@521 -- # local subsystem config 00:20:20.396 12:17:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.396 12:17:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.396 { 00:20:20.396 "params": { 00:20:20.396 "name": "Nvme$subsystem", 00:20:20.396 "trtype": "$TEST_TRANSPORT", 00:20:20.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.396 "adrfam": "ipv4", 00:20:20.396 "trsvcid": "$NVMF_PORT", 00:20:20.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:20.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.396 "hdgst": ${hdgst:-false}, 00:20:20.396 "ddgst": ${ddgst:-false} 00:20:20.396 }, 00:20:20.396 "method": "bdev_nvme_attach_controller" 00:20:20.396 } 00:20:20.396 EOF 00:20:20.396 )") 00:20:20.396 12:17:13 -- nvmf/common.sh@545 -- # jq . 00:20:20.396 12:17:13 -- nvmf/common.sh@543 -- # cat 00:20:20.396 12:17:13 -- nvmf/common.sh@545 -- # jq . 00:20:20.396 12:17:13 -- nvmf/common.sh@546 -- # IFS=, 00:20:20.396 12:17:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:20.396 "params": { 00:20:20.396 "name": "Nvme1", 00:20:20.396 "trtype": "tcp", 00:20:20.396 "traddr": "10.0.0.2", 00:20:20.396 "adrfam": "ipv4", 00:20:20.396 "trsvcid": "4420", 00:20:20.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.396 "hdgst": false, 00:20:20.396 "ddgst": false 00:20:20.396 }, 00:20:20.396 "method": "bdev_nvme_attach_controller" 00:20:20.396 }' 00:20:20.396 12:17:13 -- nvmf/common.sh@546 -- # IFS=, 00:20:20.396 12:17:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:20.396 "params": { 00:20:20.396 "name": "Nvme1", 00:20:20.396 "trtype": "tcp", 00:20:20.396 "traddr": "10.0.0.2", 00:20:20.396 "adrfam": "ipv4", 00:20:20.396 "trsvcid": "4420", 00:20:20.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.396 "hdgst": false, 00:20:20.396 "ddgst": false 00:20:20.396 }, 00:20:20.396 "method": "bdev_nvme_attach_controller" 00:20:20.396 }' 00:20:20.396 12:17:13 -- nvmf/common.sh@545 -- # jq . 00:20:20.396 12:17:13 -- nvmf/common.sh@546 -- # IFS=, 00:20:20.396 12:17:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:20.396 "params": { 00:20:20.396 "name": "Nvme1", 00:20:20.396 "trtype": "tcp", 00:20:20.396 "traddr": "10.0.0.2", 00:20:20.396 "adrfam": "ipv4", 00:20:20.396 "trsvcid": "4420", 00:20:20.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.396 "hdgst": false, 00:20:20.396 "ddgst": false 00:20:20.396 }, 00:20:20.396 "method": "bdev_nvme_attach_controller" 00:20:20.396 }' 00:20:20.396 12:17:13 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:20:20.396 12:17:13 -- nvmf/common.sh@521 -- # config=() 00:20:20.396 12:17:13 -- nvmf/common.sh@521 -- # local subsystem config 00:20:20.396 12:17:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:20.396 12:17:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:20.396 { 00:20:20.396 "params": { 00:20:20.396 "name": "Nvme$subsystem", 00:20:20.396 "trtype": "$TEST_TRANSPORT", 00:20:20.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.396 "adrfam": "ipv4", 00:20:20.396 "trsvcid": "$NVMF_PORT", 00:20:20.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.396 "hdgst": ${hdgst:-false}, 00:20:20.396 "ddgst": ${ddgst:-false} 00:20:20.396 }, 00:20:20.396 "method": "bdev_nvme_attach_controller" 00:20:20.396 } 00:20:20.396 EOF 00:20:20.396 )") 00:20:20.396 12:17:13 -- nvmf/common.sh@543 -- # cat 00:20:20.396 12:17:13 -- nvmf/common.sh@545 -- # jq . 
00:20:20.396 12:17:13 -- nvmf/common.sh@546 -- # IFS=, 00:20:20.396 12:17:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:20.396 "params": { 00:20:20.396 "name": "Nvme1", 00:20:20.396 "trtype": "tcp", 00:20:20.396 "traddr": "10.0.0.2", 00:20:20.396 "adrfam": "ipv4", 00:20:20.396 "trsvcid": "4420", 00:20:20.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.396 "hdgst": false, 00:20:20.396 "ddgst": false 00:20:20.396 }, 00:20:20.396 "method": "bdev_nvme_attach_controller" 00:20:20.396 }' 00:20:20.396 [2024-04-26 12:17:13.821859] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:20:20.396 [2024-04-26 12:17:13.821955] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:20:20.396 [2024-04-26 12:17:13.824247] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:20:20.397 [2024-04-26 12:17:13.824809] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:20:20.397 12:17:13 -- target/bdev_io_wait.sh@37 -- # wait 66802 00:20:20.397 [2024-04-26 12:17:13.843845] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:20:20.397 [2024-04-26 12:17:13.844290] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:20.397 [2024-04-26 12:17:13.859640] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:20:20.397 [2024-04-26 12:17:13.859735] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:20:20.654 [2024-04-26 12:17:14.035356] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.654 [2024-04-26 12:17:14.111375] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.911 [2024-04-26 12:17:14.123910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:20.911 [2024-04-26 12:17:14.181613] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.911 [2024-04-26 12:17:14.215534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:20.911 [2024-04-26 12:17:14.256586] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.911 Running I/O for 1 seconds... 00:20:20.911 [2024-04-26 12:17:14.283034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:20.911 [2024-04-26 12:17:14.358505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:20:20.911 Running I/O for 1 seconds... 00:20:21.169 Running I/O for 1 seconds... 00:20:21.169 Running I/O for 1 seconds... 
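Each of the four bdevperf jobs above receives its bdev configuration via --json /dev/fd/63, i.e. a process substitution carrying the fragment that gen_nvmf_target_json prints in the trace. Wrapped in SPDK's usual JSON-config envelope (the outer subsystems/config layer is an assumption here, it is not printed in this trace, and the file name is illustrative), one of the jobs looks roughly like:

# bdevperf.json -- hypothetical file equivalent to gen_nvmf_target_json's output.
cat > bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# The "write" job from the trace: 128-deep 4 KiB writes for 1 second on its own core.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
    --json bdevperf.json -q 128 -o 4096 -w write -t 1 -s 256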
00:20:22.102 00:20:22.102 Latency(us) 00:20:22.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.103 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:20:22.103 Nvme1n1 : 1.00 179395.83 700.76 0.00 0.00 710.91 322.09 942.08 00:20:22.103 =================================================================================================================== 00:20:22.103 Total : 179395.83 700.76 0.00 0.00 710.91 322.09 942.08 00:20:22.103 00:20:22.103 Latency(us) 00:20:22.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.103 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:20:22.103 Nvme1n1 : 1.01 8736.78 34.13 0.00 0.00 14574.00 9830.40 21567.30 00:20:22.103 =================================================================================================================== 00:20:22.103 Total : 8736.78 34.13 0.00 0.00 14574.00 9830.40 21567.30 00:20:22.103 00:20:22.103 Latency(us) 00:20:22.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.103 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:20:22.103 Nvme1n1 : 1.01 7805.75 30.49 0.00 0.00 16309.29 8877.15 28716.68 00:20:22.103 =================================================================================================================== 00:20:22.103 Total : 7805.75 30.49 0.00 0.00 16309.29 8877.15 28716.68 00:20:22.103 00:20:22.103 Latency(us) 00:20:22.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.103 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:20:22.103 Nvme1n1 : 1.01 8819.67 34.45 0.00 0.00 14452.52 6762.12 21567.30 00:20:22.103 =================================================================================================================== 00:20:22.103 Total : 8819.67 34.45 0.00 0.00 14452.52 6762.12 21567.30 00:20:22.360 12:17:15 -- target/bdev_io_wait.sh@38 -- # wait 66804 00:20:22.360 12:17:15 -- target/bdev_io_wait.sh@39 -- # wait 66806 00:20:22.360 12:17:15 -- target/bdev_io_wait.sh@40 -- # wait 66809 00:20:22.360 12:17:15 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:22.360 12:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.360 12:17:15 -- common/autotest_common.sh@10 -- # set +x 00:20:22.360 12:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.360 12:17:15 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:20:22.360 12:17:15 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:20:22.360 12:17:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:22.360 12:17:15 -- nvmf/common.sh@117 -- # sync 00:20:22.619 12:17:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:22.619 12:17:15 -- nvmf/common.sh@120 -- # set +e 00:20:22.619 12:17:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:22.619 12:17:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:22.619 rmmod nvme_tcp 00:20:22.619 rmmod nvme_fabrics 00:20:22.619 rmmod nvme_keyring 00:20:22.619 12:17:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:22.619 12:17:15 -- nvmf/common.sh@124 -- # set -e 00:20:22.619 12:17:15 -- nvmf/common.sh@125 -- # return 0 00:20:22.619 12:17:15 -- nvmf/common.sh@478 -- # '[' -n 66767 ']' 00:20:22.619 12:17:15 -- nvmf/common.sh@479 -- # killprocess 66767 00:20:22.619 12:17:15 -- common/autotest_common.sh@936 -- # '[' -z 66767 ']' 00:20:22.619 12:17:15 -- common/autotest_common.sh@940 -- # 
kill -0 66767 00:20:22.619 12:17:15 -- common/autotest_common.sh@941 -- # uname 00:20:22.619 12:17:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:22.619 12:17:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66767 00:20:22.619 12:17:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:22.619 killing process with pid 66767 00:20:22.619 12:17:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:22.619 12:17:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66767' 00:20:22.619 12:17:15 -- common/autotest_common.sh@955 -- # kill 66767 00:20:22.619 12:17:15 -- common/autotest_common.sh@960 -- # wait 66767 00:20:22.878 12:17:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:22.878 12:17:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:22.878 12:17:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:22.878 12:17:16 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.878 12:17:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:22.878 12:17:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.878 12:17:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.878 12:17:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.878 12:17:16 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:22.878 00:20:22.878 real 0m4.027s 00:20:22.878 user 0m17.601s 00:20:22.878 sys 0m2.219s 00:20:22.878 12:17:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:22.878 12:17:16 -- common/autotest_common.sh@10 -- # set +x 00:20:22.878 ************************************ 00:20:22.878 END TEST nvmf_bdev_io_wait 00:20:22.878 ************************************ 00:20:22.878 12:17:16 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:22.878 12:17:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:22.878 12:17:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:22.878 12:17:16 -- common/autotest_common.sh@10 -- # set +x 00:20:22.878 ************************************ 00:20:22.878 START TEST nvmf_queue_depth 00:20:22.878 ************************************ 00:20:22.878 12:17:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:23.137 * Looking for test storage... 
00:20:23.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:23.137 12:17:16 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:23.137 12:17:16 -- nvmf/common.sh@7 -- # uname -s 00:20:23.137 12:17:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.137 12:17:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.137 12:17:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.137 12:17:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.137 12:17:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.137 12:17:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.137 12:17:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.137 12:17:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.137 12:17:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.137 12:17:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.137 12:17:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:20:23.137 12:17:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:20:23.137 12:17:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.137 12:17:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.137 12:17:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:23.137 12:17:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.137 12:17:16 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:23.137 12:17:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.137 12:17:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.137 12:17:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.137 12:17:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.137 12:17:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.137 12:17:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.137 12:17:16 -- paths/export.sh@5 -- # export PATH 00:20:23.137 12:17:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.137 12:17:16 -- nvmf/common.sh@47 -- # : 0 00:20:23.137 12:17:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:23.137 12:17:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:23.137 12:17:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.137 12:17:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.137 12:17:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.137 12:17:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:23.137 12:17:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:23.137 12:17:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:23.137 12:17:16 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:20:23.137 12:17:16 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:20:23.137 12:17:16 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.137 12:17:16 -- target/queue_depth.sh@19 -- # nvmftestinit 00:20:23.137 12:17:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:23.137 12:17:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.137 12:17:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:23.137 12:17:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:23.137 12:17:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:23.137 12:17:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.137 12:17:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.137 12:17:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.137 12:17:16 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:23.137 12:17:16 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:23.137 12:17:16 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:23.137 12:17:16 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:23.137 12:17:16 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:23.137 12:17:16 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:23.137 12:17:16 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.137 12:17:16 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.137 12:17:16 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:23.137 12:17:16 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:23.137 12:17:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:23.137 12:17:16 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:23.137 12:17:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:23.137 12:17:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.137 12:17:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:23.137 12:17:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:23.137 12:17:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:23.137 12:17:16 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:23.137 12:17:16 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:23.137 12:17:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:23.137 Cannot find device "nvmf_tgt_br" 00:20:23.137 12:17:16 -- nvmf/common.sh@155 -- # true 00:20:23.137 12:17:16 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:23.137 Cannot find device "nvmf_tgt_br2" 00:20:23.137 12:17:16 -- nvmf/common.sh@156 -- # true 00:20:23.137 12:17:16 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:23.137 12:17:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:23.137 Cannot find device "nvmf_tgt_br" 00:20:23.137 12:17:16 -- nvmf/common.sh@158 -- # true 00:20:23.137 12:17:16 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:23.137 Cannot find device "nvmf_tgt_br2" 00:20:23.137 12:17:16 -- nvmf/common.sh@159 -- # true 00:20:23.137 12:17:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:23.137 12:17:16 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:23.137 12:17:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:23.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.137 12:17:16 -- nvmf/common.sh@162 -- # true 00:20:23.137 12:17:16 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:23.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.137 12:17:16 -- nvmf/common.sh@163 -- # true 00:20:23.137 12:17:16 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:23.137 12:17:16 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:23.137 12:17:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:23.137 12:17:16 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:23.137 12:17:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:23.137 12:17:16 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:23.137 12:17:16 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:23.137 12:17:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:23.398 12:17:16 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:23.398 12:17:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:23.398 12:17:16 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:23.398 12:17:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:23.398 12:17:16 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:23.398 12:17:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:23.398 12:17:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:20:23.398 12:17:16 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:23.398 12:17:16 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:23.398 12:17:16 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:23.398 12:17:16 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:23.398 12:17:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:23.398 12:17:16 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:23.398 12:17:16 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:23.398 12:17:16 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:23.398 12:17:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:23.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:20:23.398 00:20:23.398 --- 10.0.0.2 ping statistics --- 00:20:23.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.398 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:23.398 12:17:16 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:23.398 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:23.398 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:20:23.398 00:20:23.398 --- 10.0.0.3 ping statistics --- 00:20:23.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.398 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:23.398 12:17:16 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:23.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:23.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:23.398 00:20:23.398 --- 10.0.0.1 ping statistics --- 00:20:23.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.398 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:23.398 12:17:16 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.398 12:17:16 -- nvmf/common.sh@422 -- # return 0 00:20:23.398 12:17:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:23.398 12:17:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.398 12:17:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:23.398 12:17:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:23.398 12:17:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.398 12:17:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:23.398 12:17:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:23.398 12:17:16 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:20:23.398 12:17:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:23.398 12:17:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:23.398 12:17:16 -- common/autotest_common.sh@10 -- # set +x 00:20:23.398 12:17:16 -- nvmf/common.sh@470 -- # nvmfpid=67047 00:20:23.398 12:17:16 -- nvmf/common.sh@471 -- # waitforlisten 67047 00:20:23.398 12:17:16 -- common/autotest_common.sh@817 -- # '[' -z 67047 ']' 00:20:23.398 12:17:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.398 12:17:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:23.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
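For this test nvmfappstart is called with -m 0x2, so the target gets a single reactor pinned to core 1 (the bdev_io_wait target earlier used -m 0xF, hence the four "Reactor started" notices on cores 0-3). A purely illustrative snippet for decoding such a core mask, not part of the test scripts:

# Hypothetical helper: list the cores selected by an SPDK -m core mask.
mask=0x2
for core in $(seq 0 31); do
  if (( (mask >> core) & 1 )); then
    echo "reactor on core $core"
  fi
done
# -m 0x2 -> core 1 only; -m 0xF -> cores 0, 1, 2 and 3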
00:20:23.398 12:17:16 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:23.398 12:17:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.398 12:17:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:23.398 12:17:16 -- common/autotest_common.sh@10 -- # set +x 00:20:23.398 [2024-04-26 12:17:16.814953] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:20:23.398 [2024-04-26 12:17:16.815058] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.657 [2024-04-26 12:17:16.953854] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.658 [2024-04-26 12:17:17.091027] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.658 [2024-04-26 12:17:17.091106] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.658 [2024-04-26 12:17:17.091123] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.658 [2024-04-26 12:17:17.091133] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.658 [2024-04-26 12:17:17.091140] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:23.658 [2024-04-26 12:17:17.091190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.592 12:17:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:24.592 12:17:17 -- common/autotest_common.sh@850 -- # return 0 00:20:24.592 12:17:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:24.592 12:17:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:24.592 12:17:17 -- common/autotest_common.sh@10 -- # set +x 00:20:24.592 12:17:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.592 12:17:17 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:24.592 12:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.592 12:17:17 -- common/autotest_common.sh@10 -- # set +x 00:20:24.592 [2024-04-26 12:17:17.845325] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.592 12:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.592 12:17:17 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:24.592 12:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.592 12:17:17 -- common/autotest_common.sh@10 -- # set +x 00:20:24.592 Malloc0 00:20:24.592 12:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.592 12:17:17 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:24.592 12:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.592 12:17:17 -- common/autotest_common.sh@10 -- # set +x 00:20:24.592 12:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.592 12:17:17 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:24.592 12:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.592 12:17:17 -- 
common/autotest_common.sh@10 -- # set +x 00:20:24.592 12:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.592 12:17:17 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:24.592 12:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.592 12:17:17 -- common/autotest_common.sh@10 -- # set +x 00:20:24.592 [2024-04-26 12:17:17.908616] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.592 12:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.592 12:17:17 -- target/queue_depth.sh@30 -- # bdevperf_pid=67079 00:20:24.592 12:17:17 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:24.592 12:17:17 -- target/queue_depth.sh@33 -- # waitforlisten 67079 /var/tmp/bdevperf.sock 00:20:24.592 12:17:17 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:20:24.592 12:17:17 -- common/autotest_common.sh@817 -- # '[' -z 67079 ']' 00:20:24.592 12:17:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:24.592 12:17:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:24.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:24.593 12:17:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:24.593 12:17:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:24.593 12:17:17 -- common/autotest_common.sh@10 -- # set +x 00:20:24.593 [2024-04-26 12:17:17.963954] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:20:24.593 [2024-04-26 12:17:17.964052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67079 ] 00:20:24.851 [2024-04-26 12:17:18.131016] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.851 [2024-04-26 12:17:18.261445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.804 12:17:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:25.804 12:17:18 -- common/autotest_common.sh@850 -- # return 0 00:20:25.804 12:17:18 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:25.804 12:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.804 12:17:18 -- common/autotest_common.sh@10 -- # set +x 00:20:25.804 NVMe0n1 00:20:25.804 12:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.804 12:17:19 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:25.804 Running I/O for 10 seconds... 
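Unlike the bdev_io_wait jobs, this bdevperf is started with -z (pause until told to run) on its own RPC socket; the NVMe-oF controller is then attached through that socket and the 10-second verify run is kicked off with bdevperf.py perform_tests. Pulled out of the trace into sketch form (the readiness poll is an assumed stand-in for the suite's waitforlisten helper, everything else matches the commands shown):

#!/usr/bin/env bash
# Sketch of the queue-depth run: bdevperf in RPC-server mode, configured at runtime.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock

# 1024-deep 4 KiB verify workload for 10 seconds; -z waits for RPC before starting I/O.
"$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!

until "$SPDK/scripts/rpc.py" -s "$SOCK" -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# Attach the namespace exported by the target; it shows up as bdev NVMe0n1.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Run the configured workload and print the per-bdev results.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

kill "$bdevperf_pid"   # the suite does this via killprocess once results are in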
00:20:35.847 00:20:35.847 Latency(us) 00:20:35.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.847 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:20:35.847 Verification LBA range: start 0x0 length 0x4000 00:20:35.847 NVMe0n1 : 10.08 7668.84 29.96 0.00 0.00 132814.76 18111.77 97708.22 00:20:35.847 =================================================================================================================== 00:20:35.847 Total : 7668.84 29.96 0.00 0.00 132814.76 18111.77 97708.22 00:20:35.847 0 00:20:35.847 12:17:29 -- target/queue_depth.sh@39 -- # killprocess 67079 00:20:35.847 12:17:29 -- common/autotest_common.sh@936 -- # '[' -z 67079 ']' 00:20:35.847 12:17:29 -- common/autotest_common.sh@940 -- # kill -0 67079 00:20:35.847 12:17:29 -- common/autotest_common.sh@941 -- # uname 00:20:35.847 12:17:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:35.847 12:17:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67079 00:20:35.847 12:17:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:35.847 12:17:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:35.847 12:17:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67079' 00:20:35.847 killing process with pid 67079 00:20:35.847 12:17:29 -- common/autotest_common.sh@955 -- # kill 67079 00:20:35.847 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.847 00:20:35.847 Latency(us) 00:20:35.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.847 =================================================================================================================== 00:20:35.847 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.847 12:17:29 -- common/autotest_common.sh@960 -- # wait 67079 00:20:36.106 12:17:29 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:36.106 12:17:29 -- target/queue_depth.sh@43 -- # nvmftestfini 00:20:36.106 12:17:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:36.106 12:17:29 -- nvmf/common.sh@117 -- # sync 00:20:36.366 12:17:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:36.366 12:17:29 -- nvmf/common.sh@120 -- # set +e 00:20:36.366 12:17:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:36.366 12:17:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:36.366 rmmod nvme_tcp 00:20:36.366 rmmod nvme_fabrics 00:20:36.366 rmmod nvme_keyring 00:20:36.366 12:17:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:36.366 12:17:29 -- nvmf/common.sh@124 -- # set -e 00:20:36.366 12:17:29 -- nvmf/common.sh@125 -- # return 0 00:20:36.366 12:17:29 -- nvmf/common.sh@478 -- # '[' -n 67047 ']' 00:20:36.366 12:17:29 -- nvmf/common.sh@479 -- # killprocess 67047 00:20:36.366 12:17:29 -- common/autotest_common.sh@936 -- # '[' -z 67047 ']' 00:20:36.366 12:17:29 -- common/autotest_common.sh@940 -- # kill -0 67047 00:20:36.366 12:17:29 -- common/autotest_common.sh@941 -- # uname 00:20:36.366 12:17:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:36.366 12:17:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67047 00:20:36.366 killing process with pid 67047 00:20:36.366 12:17:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:36.366 12:17:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:36.366 12:17:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67047' 00:20:36.366 12:17:29 -- 
common/autotest_common.sh@955 -- # kill 67047 00:20:36.366 12:17:29 -- common/autotest_common.sh@960 -- # wait 67047 00:20:36.625 12:17:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:36.625 12:17:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:36.625 12:17:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:36.625 12:17:29 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:36.625 12:17:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:36.625 12:17:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.625 12:17:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:36.625 12:17:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.625 12:17:30 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:36.625 00:20:36.625 real 0m13.722s 00:20:36.625 user 0m23.792s 00:20:36.625 sys 0m2.234s 00:20:36.625 12:17:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:36.625 ************************************ 00:20:36.625 END TEST nvmf_queue_depth 00:20:36.625 ************************************ 00:20:36.625 12:17:30 -- common/autotest_common.sh@10 -- # set +x 00:20:36.625 12:17:30 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:20:36.625 12:17:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:36.625 12:17:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:36.625 12:17:30 -- common/autotest_common.sh@10 -- # set +x 00:20:36.884 ************************************ 00:20:36.884 START TEST nvmf_multipath 00:20:36.884 ************************************ 00:20:36.884 12:17:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:20:36.884 * Looking for test storage... 
00:20:36.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:36.884 12:17:30 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:36.884 12:17:30 -- nvmf/common.sh@7 -- # uname -s 00:20:36.884 12:17:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.884 12:17:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.884 12:17:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.884 12:17:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.884 12:17:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.884 12:17:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.884 12:17:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.884 12:17:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.884 12:17:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.884 12:17:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.884 12:17:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:20:36.884 12:17:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:20:36.884 12:17:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.884 12:17:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.884 12:17:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:36.884 12:17:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.884 12:17:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:36.884 12:17:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.884 12:17:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.884 12:17:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.884 12:17:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.884 12:17:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.884 12:17:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.884 12:17:30 -- paths/export.sh@5 -- # export PATH 00:20:36.884 12:17:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.884 12:17:30 -- nvmf/common.sh@47 -- # : 0 00:20:36.884 12:17:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:36.884 12:17:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:36.884 12:17:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.885 12:17:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.885 12:17:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.885 12:17:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:36.885 12:17:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:36.885 12:17:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:36.885 12:17:30 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:36.885 12:17:30 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:36.885 12:17:30 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:36.885 12:17:30 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:36.885 12:17:30 -- target/multipath.sh@43 -- # nvmftestinit 00:20:36.885 12:17:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:36.885 12:17:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.885 12:17:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:36.885 12:17:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:36.885 12:17:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:36.885 12:17:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.885 12:17:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:36.885 12:17:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.885 12:17:30 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:36.885 12:17:30 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:36.885 12:17:30 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:36.885 12:17:30 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:36.885 12:17:30 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:36.885 12:17:30 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:36.885 12:17:30 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.885 12:17:30 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.885 12:17:30 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:36.885 12:17:30 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:36.885 12:17:30 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:36.885 12:17:30 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:36.885 12:17:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:36.885 12:17:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.885 12:17:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:36.885 12:17:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:36.885 12:17:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:36.885 12:17:30 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:36.885 12:17:30 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:36.885 12:17:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:36.885 Cannot find device "nvmf_tgt_br" 00:20:36.885 12:17:30 -- nvmf/common.sh@155 -- # true 00:20:36.885 12:17:30 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:36.885 Cannot find device "nvmf_tgt_br2" 00:20:36.885 12:17:30 -- nvmf/common.sh@156 -- # true 00:20:36.885 12:17:30 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:36.885 12:17:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:36.885 Cannot find device "nvmf_tgt_br" 00:20:36.885 12:17:30 -- nvmf/common.sh@158 -- # true 00:20:36.885 12:17:30 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:36.885 Cannot find device "nvmf_tgt_br2" 00:20:36.885 12:17:30 -- nvmf/common.sh@159 -- # true 00:20:36.885 12:17:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:36.885 12:17:30 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:37.142 12:17:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:37.142 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:37.142 12:17:30 -- nvmf/common.sh@162 -- # true 00:20:37.142 12:17:30 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:37.142 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:37.142 12:17:30 -- nvmf/common.sh@163 -- # true 00:20:37.142 12:17:30 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:37.142 12:17:30 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:37.143 12:17:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:37.143 12:17:30 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:37.143 12:17:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:37.143 12:17:30 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:37.143 12:17:30 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:37.143 12:17:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:37.143 12:17:30 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:37.143 12:17:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:37.143 12:17:30 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:37.143 12:17:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:37.143 12:17:30 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:37.143 12:17:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:20:37.143 12:17:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:37.143 12:17:30 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:37.143 12:17:30 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:37.143 12:17:30 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:37.143 12:17:30 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:37.143 12:17:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:37.143 12:17:30 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:37.143 12:17:30 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:37.143 12:17:30 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:37.143 12:17:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:37.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:20:37.143 00:20:37.143 --- 10.0.0.2 ping statistics --- 00:20:37.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.143 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:37.143 12:17:30 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:37.143 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:37.143 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:20:37.143 00:20:37.143 --- 10.0.0.3 ping statistics --- 00:20:37.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.143 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:37.143 12:17:30 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:37.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:37.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:20:37.143 00:20:37.143 --- 10.0.0.1 ping statistics --- 00:20:37.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.143 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:37.143 12:17:30 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.143 12:17:30 -- nvmf/common.sh@422 -- # return 0 00:20:37.143 12:17:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:37.143 12:17:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.143 12:17:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:37.143 12:17:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:37.143 12:17:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.143 12:17:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:37.143 12:17:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:37.143 12:17:30 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:20:37.143 12:17:30 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:20:37.143 12:17:30 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:20:37.143 12:17:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:37.143 12:17:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:37.143 12:17:30 -- common/autotest_common.sh@10 -- # set +x 00:20:37.143 12:17:30 -- nvmf/common.sh@470 -- # nvmfpid=67403 00:20:37.143 12:17:30 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:37.143 12:17:30 -- nvmf/common.sh@471 -- # waitforlisten 67403 00:20:37.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
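The nvmf_veth_init steps traced above build a small virtual topology before the target starts: one initiator-side veth left in the root namespace, two target-side veths moved into the nvmf_tgt_ns_spdk namespace (one per path), and a bridge joining the host-side peers. The earlier "Cannot find device" / "Cannot open network namespace" messages are expected; the helper first tears down any leftover topology from a previous run. Condensed into plain iproute2 commands (a summary of the trace above, not the verbatim script; the "ip link set ... up" steps are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = the two target-side paths
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bridge the host-side peers together and allow NVMe/TCP traffic in
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) verify the topology before nvmf_tgt is launched inside the namespace.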
00:20:37.143 12:17:30 -- common/autotest_common.sh@817 -- # '[' -z 67403 ']' 00:20:37.143 12:17:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.143 12:17:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:37.143 12:17:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.143 12:17:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:37.143 12:17:30 -- common/autotest_common.sh@10 -- # set +x 00:20:37.401 [2024-04-26 12:17:30.644072] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:20:37.401 [2024-04-26 12:17:30.644158] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.401 [2024-04-26 12:17:30.781975] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:37.659 [2024-04-26 12:17:30.895334] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.659 [2024-04-26 12:17:30.895587] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.659 [2024-04-26 12:17:30.895723] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.659 [2024-04-26 12:17:30.895779] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.659 [2024-04-26 12:17:30.895879] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:37.659 [2024-04-26 12:17:30.896094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.659 [2024-04-26 12:17:30.896396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.659 [2024-04-26 12:17:30.896497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:37.659 [2024-04-26 12:17:30.896501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.224 12:17:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:38.224 12:17:31 -- common/autotest_common.sh@850 -- # return 0 00:20:38.224 12:17:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:38.225 12:17:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:38.225 12:17:31 -- common/autotest_common.sh@10 -- # set +x 00:20:38.482 12:17:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.482 12:17:31 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:38.740 [2024-04-26 12:17:31.969009] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.740 12:17:32 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:38.998 Malloc0 00:20:38.998 12:17:32 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:20:39.255 12:17:32 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:39.514 12:17:32 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:39.771 
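With networking in place, the multipath target is provisioned over rpc.py: a TCP transport, a 64 MiB malloc bdev (MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 from the script header), a subsystem, the namespace, and one listener per path; the second listener on 10.0.0.3 is added immediately below. Condensed from the trace (RPC stands in for the full scripts/rpc.py path shown above):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420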
[2024-04-26 12:17:33.033627] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.771 12:17:33 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:40.029 [2024-04-26 12:17:33.297918] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:40.029 12:17:33 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 --hostid=df75fdfd-6375-420c-96e8-b6b24e93a083 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:20:40.029 12:17:33 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 --hostid=df75fdfd-6375-420c-96e8-b6b24e93a083 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:20:40.287 12:17:33 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:20:40.288 12:17:33 -- common/autotest_common.sh@1184 -- # local i=0 00:20:40.288 12:17:33 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:40.288 12:17:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:40.288 12:17:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:42.186 12:17:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:42.186 12:17:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:42.186 12:17:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:42.186 12:17:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:42.186 12:17:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:42.186 12:17:35 -- common/autotest_common.sh@1194 -- # return 0 00:20:42.186 12:17:35 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:20:42.186 12:17:35 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:20:42.186 12:17:35 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:20:42.186 12:17:35 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:20:42.186 12:17:35 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:20:42.186 12:17:35 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:20:42.186 12:17:35 -- target/multipath.sh@38 -- # return 0 00:20:42.186 12:17:35 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:20:42.186 12:17:35 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:20:42.186 12:17:35 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:20:42.187 12:17:35 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:20:42.187 12:17:35 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:20:42.187 12:17:35 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:20:42.187 12:17:35 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:20:42.187 12:17:35 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:20:42.187 12:17:35 -- target/multipath.sh@22 -- # local timeout=20 00:20:42.187 12:17:35 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:42.187 12:17:35 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:20:42.187 12:17:35 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:20:42.187 12:17:35 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:20:42.187 12:17:35 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:20:42.187 12:17:35 -- target/multipath.sh@22 -- # local timeout=20 00:20:42.187 12:17:35 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:42.187 12:17:35 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:42.187 12:17:35 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:20:42.187 12:17:35 -- target/multipath.sh@85 -- # echo numa 00:20:42.187 12:17:35 -- target/multipath.sh@88 -- # fio_pid=67493 00:20:42.187 12:17:35 -- target/multipath.sh@90 -- # sleep 1 00:20:42.187 12:17:35 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:20:42.187 [global] 00:20:42.187 thread=1 00:20:42.187 invalidate=1 00:20:42.187 rw=randrw 00:20:42.187 time_based=1 00:20:42.187 runtime=6 00:20:42.187 ioengine=libaio 00:20:42.187 direct=1 00:20:42.187 bs=4096 00:20:42.187 iodepth=128 00:20:42.187 norandommap=0 00:20:42.187 numjobs=1 00:20:42.187 00:20:42.187 verify_dump=1 00:20:42.187 verify_backlog=512 00:20:42.187 verify_state_save=0 00:20:42.187 do_verify=1 00:20:42.187 verify=crc32c-intel 00:20:42.187 [job0] 00:20:42.187 filename=/dev/nvme0n1 00:20:42.446 Could not set queue depth (nvme0n1) 00:20:42.446 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:42.446 fio-3.35 00:20:42.446 Starting 1 thread 00:20:43.381 12:17:36 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:43.639 12:17:36 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:43.897 12:17:37 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:20:43.897 12:17:37 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:20:43.897 12:17:37 -- target/multipath.sh@22 -- # local timeout=20 00:20:43.897 12:17:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:43.897 12:17:37 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:20:43.897 12:17:37 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:20:43.897 12:17:37 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:20:43.897 12:17:37 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:20:43.897 12:17:37 -- target/multipath.sh@22 -- # local timeout=20 00:20:43.897 12:17:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:43.897 12:17:37 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:20:43.897 12:17:37 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:20:43.897 12:17:37 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:44.156 12:17:37 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:44.414 12:17:37 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:20:44.414 12:17:37 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:20:44.414 12:17:37 -- target/multipath.sh@22 -- # local timeout=20 00:20:44.414 12:17:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:44.414 12:17:37 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:20:44.414 12:17:37 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:20:44.414 12:17:37 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:20:44.414 12:17:37 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:20:44.414 12:17:37 -- target/multipath.sh@22 -- # local timeout=20 00:20:44.414 12:17:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:44.414 12:17:37 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:44.414 12:17:37 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:20:44.414 12:17:37 -- target/multipath.sh@104 -- # wait 67493 00:20:48.600 00:20:48.600 job0: (groupid=0, jobs=1): err= 0: pid=67520: Fri Apr 26 12:17:41 2024 00:20:48.600 read: IOPS=10.0k, BW=39.2MiB/s (41.2MB/s)(236MiB/6006msec) 00:20:48.600 slat (usec): min=5, max=6153, avg=57.69, stdev=233.76 00:20:48.600 clat (usec): min=1521, max=15398, avg=8604.75, stdev=1530.17 00:20:48.600 lat (usec): min=1531, max=15408, avg=8662.44, stdev=1535.25 00:20:48.600 clat percentiles (usec): 00:20:48.600 | 1.00th=[ 4359], 5.00th=[ 6521], 10.00th=[ 7308], 20.00th=[ 7832], 00:20:48.600 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8586], 00:20:48.600 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[ 9896], 95.00th=[12256], 00:20:48.600 | 99.00th=[13435], 99.50th=[13829], 99.90th=[14222], 99.95th=[14484], 00:20:48.600 | 99.99th=[14615] 00:20:48.600 bw ( KiB/s): min=10368, max=25232, per=52.46%, avg=21083.00, stdev=5268.02, samples=11 00:20:48.600 iops : min= 2592, max= 6308, avg=5270.73, stdev=1316.99, samples=11 00:20:48.600 write: IOPS=5995, BW=23.4MiB/s (24.6MB/s)(127MiB/5413msec); 0 zone resets 00:20:48.600 slat (usec): min=14, max=2780, avg=67.18, stdev=161.98 00:20:48.600 clat (usec): min=1495, max=14538, avg=7475.45, stdev=1302.05 00:20:48.600 lat (usec): min=1549, max=14579, avg=7542.62, stdev=1305.95 00:20:48.600 clat percentiles (usec): 00:20:48.600 | 1.00th=[ 3490], 5.00th=[ 4424], 10.00th=[ 6128], 20.00th=[ 6980], 00:20:48.600 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7832], 00:20:48.600 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8586], 95.00th=[ 8848], 00:20:48.600 | 99.00th=[11600], 99.50th=[12256], 99.90th=[13435], 99.95th=[13829], 00:20:48.600 | 99.99th=[14222] 00:20:48.600 bw ( KiB/s): min=10520, max=24744, per=88.14%, avg=21138.00, stdev=5116.64, samples=11 00:20:48.600 iops : min= 2630, max= 6186, avg=5284.45, 
stdev=1279.13, samples=11 00:20:48.600 lat (msec) : 2=0.03%, 4=1.34%, 10=92.06%, 20=6.56% 00:20:48.600 cpu : usr=6.36%, sys=21.56%, ctx=5347, majf=0, minf=84 00:20:48.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:20:48.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:48.600 issued rwts: total=60344,32455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:48.600 00:20:48.600 Run status group 0 (all jobs): 00:20:48.600 READ: bw=39.2MiB/s (41.2MB/s), 39.2MiB/s-39.2MiB/s (41.2MB/s-41.2MB/s), io=236MiB (247MB), run=6006-6006msec 00:20:48.600 WRITE: bw=23.4MiB/s (24.6MB/s), 23.4MiB/s-23.4MiB/s (24.6MB/s-24.6MB/s), io=127MiB (133MB), run=5413-5413msec 00:20:48.600 00:20:48.600 Disk stats (read/write): 00:20:48.600 nvme0n1: ios=59720/31585, merge=0/0, ticks=492555/221808, in_queue=714363, util=98.65% 00:20:48.600 12:17:41 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:20:48.858 12:17:42 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:49.117 12:17:42 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:20:49.117 12:17:42 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:20:49.117 12:17:42 -- target/multipath.sh@22 -- # local timeout=20 00:20:49.117 12:17:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:49.117 12:17:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:20:49.117 12:17:42 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:20:49.117 12:17:42 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:20:49.117 12:17:42 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:20:49.117 12:17:42 -- target/multipath.sh@22 -- # local timeout=20 00:20:49.117 12:17:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:49.117 12:17:42 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:20:49.117 12:17:42 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:20:49.117 12:17:42 -- target/multipath.sh@113 -- # echo round-robin 00:20:49.117 12:17:42 -- target/multipath.sh@116 -- # fio_pid=67599 00:20:49.117 12:17:42 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:20:49.117 12:17:42 -- target/multipath.sh@118 -- # sleep 1 00:20:49.117 [global] 00:20:49.117 thread=1 00:20:49.117 invalidate=1 00:20:49.117 rw=randrw 00:20:49.117 time_based=1 00:20:49.117 runtime=6 00:20:49.117 ioengine=libaio 00:20:49.117 direct=1 00:20:49.117 bs=4096 00:20:49.117 iodepth=128 00:20:49.117 norandommap=0 00:20:49.117 numjobs=1 00:20:49.117 00:20:49.117 verify_dump=1 00:20:49.117 verify_backlog=512 00:20:49.117 verify_state_save=0 00:20:49.117 do_verify=1 00:20:49.117 verify=crc32c-intel 00:20:49.117 [job0] 00:20:49.117 filename=/dev/nvme0n1 00:20:49.117 Could not set queue depth (nvme0n1) 00:20:49.117 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:49.117 fio-3.35 00:20:49.117 Starting 1 thread 00:20:50.052 12:17:43 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:50.339 12:17:43 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:50.597 12:17:43 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:20:50.597 12:17:43 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:20:50.597 12:17:43 -- target/multipath.sh@22 -- # local timeout=20 00:20:50.597 12:17:43 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:50.597 12:17:43 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:20:50.597 12:17:43 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:20:50.597 12:17:43 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:20:50.597 12:17:43 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:20:50.597 12:17:43 -- target/multipath.sh@22 -- # local timeout=20 00:20:50.597 12:17:43 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:50.597 12:17:43 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:20:50.597 12:17:43 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:20:50.597 12:17:43 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:50.856 12:17:44 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:51.115 12:17:44 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:20:51.115 12:17:44 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:20:51.115 12:17:44 -- target/multipath.sh@22 -- # local timeout=20 00:20:51.115 12:17:44 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:51.115 12:17:44 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:20:51.115 12:17:44 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:20:51.115 12:17:44 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:20:51.115 12:17:44 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:20:51.115 12:17:44 -- target/multipath.sh@22 -- # local timeout=20 00:20:51.115 12:17:44 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:51.115 12:17:44 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:51.115 12:17:44 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:20:51.115 12:17:44 -- target/multipath.sh@132 -- # wait 67599 00:20:55.366 00:20:55.366 job0: (groupid=0, jobs=1): err= 0: pid=67621: Fri Apr 26 12:17:48 2024 00:20:55.366 read: IOPS=11.1k, BW=43.4MiB/s (45.5MB/s)(261MiB/6007msec) 00:20:55.366 slat (usec): min=4, max=7543, avg=44.07, stdev=195.05 00:20:55.366 clat (usec): min=294, max=15521, avg=7856.59, stdev=2074.40 00:20:55.366 lat (usec): min=315, max=15557, avg=7900.66, stdev=2089.58 00:20:55.366 clat percentiles (usec): 00:20:55.366 | 1.00th=[ 2868], 5.00th=[ 4293], 10.00th=[ 4948], 20.00th=[ 6063], 00:20:55.366 | 30.00th=[ 7177], 40.00th=[ 7832], 50.00th=[ 8225], 60.00th=[ 8455], 00:20:55.366 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[ 9634], 95.00th=[11863], 00:20:55.366 | 99.00th=[13435], 99.50th=[13698], 99.90th=[14484], 99.95th=[14615], 00:20:55.366 | 99.99th=[15008] 00:20:55.366 bw ( KiB/s): min=13960, max=38328, per=53.06%, avg=23565.91, stdev=7196.50, samples=11 00:20:55.366 iops : min= 3490, max= 9582, avg=5891.36, stdev=1799.00, samples=11 00:20:55.366 write: IOPS=6467, BW=25.3MiB/s (26.5MB/s)(139MiB/5520msec); 0 zone resets 00:20:55.366 slat (usec): min=7, max=2727, avg=58.01, stdev=135.71 00:20:55.366 clat (usec): min=1469, max=14758, avg=6684.44, stdev=1836.95 00:20:55.366 lat (usec): min=1493, max=14788, avg=6742.45, stdev=1852.31 00:20:55.366 clat percentiles (usec): 00:20:55.366 | 1.00th=[ 2769], 5.00th=[ 3490], 10.00th=[ 3982], 20.00th=[ 4686], 00:20:55.366 | 30.00th=[ 5473], 40.00th=[ 6849], 50.00th=[ 7308], 60.00th=[ 7635], 00:20:55.366 | 70.00th=[ 7898], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8848], 00:20:55.366 | 99.00th=[11338], 99.50th=[12125], 99.90th=[13042], 99.95th=[13435], 00:20:55.366 | 99.99th=[13960] 00:20:55.366 bw ( KiB/s): min=14544, max=37608, per=91.08%, avg=23561.36, stdev=7039.43, samples=11 00:20:55.366 iops : min= 3636, max= 9402, avg=5890.27, 
stdev=1759.81, samples=11 00:20:55.366 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.04% 00:20:55.366 lat (msec) : 2=0.20%, 4=5.64%, 10=88.64%, 20=5.47% 00:20:55.366 cpu : usr=6.09%, sys=25.86%, ctx=5856, majf=0, minf=76 00:20:55.366 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:20:55.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:55.366 issued rwts: total=66696,35699,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.366 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:55.366 00:20:55.366 Run status group 0 (all jobs): 00:20:55.366 READ: bw=43.4MiB/s (45.5MB/s), 43.4MiB/s-43.4MiB/s (45.5MB/s-45.5MB/s), io=261MiB (273MB), run=6007-6007msec 00:20:55.366 WRITE: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=139MiB (146MB), run=5520-5520msec 00:20:55.366 00:20:55.366 Disk stats (read/write): 00:20:55.366 nvme0n1: ios=66119/34880, merge=0/0, ticks=493111/215157, in_queue=708268, util=98.58% 00:20:55.366 12:17:48 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:55.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:55.366 12:17:48 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:55.366 12:17:48 -- common/autotest_common.sh@1205 -- # local i=0 00:20:55.366 12:17:48 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:55.366 12:17:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:55.366 12:17:48 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:55.366 12:17:48 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:55.366 12:17:48 -- common/autotest_common.sh@1217 -- # return 0 00:20:55.366 12:17:48 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:55.933 12:17:49 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:20:55.933 12:17:49 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:20:55.933 12:17:49 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:20:55.933 12:17:49 -- target/multipath.sh@144 -- # nvmftestfini 00:20:55.933 12:17:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:55.933 12:17:49 -- nvmf/common.sh@117 -- # sync 00:20:55.933 12:17:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:55.933 12:17:49 -- nvmf/common.sh@120 -- # set +e 00:20:55.933 12:17:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:55.933 12:17:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:55.933 rmmod nvme_tcp 00:20:55.933 rmmod nvme_fabrics 00:20:55.933 rmmod nvme_keyring 00:20:55.933 12:17:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:55.933 12:17:49 -- nvmf/common.sh@124 -- # set -e 00:20:55.933 12:17:49 -- nvmf/common.sh@125 -- # return 0 00:20:55.933 12:17:49 -- nvmf/common.sh@478 -- # '[' -n 67403 ']' 00:20:55.933 12:17:49 -- nvmf/common.sh@479 -- # killprocess 67403 00:20:55.933 12:17:49 -- common/autotest_common.sh@936 -- # '[' -z 67403 ']' 00:20:55.934 12:17:49 -- common/autotest_common.sh@940 -- # kill -0 67403 00:20:55.934 12:17:49 -- common/autotest_common.sh@941 -- # uname 00:20:55.934 12:17:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:55.934 12:17:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67403 00:20:55.934 killing process with pid 67403 
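The multipath exercise that just completed follows a simple pattern: the initiator connects to the same subsystem twice (once per listener address), the kernel exposes the two paths as nvme0c0n1 and nvme0c1n1 under one nvme-subsystem, and before each fio run the test flips the listeners' ANA states on the target and polls sysfs until the initiator observes the change. A condensed sketch of one failover step; the real check_ana_state helper in multipath.sh also enforces a timeout, which is omitted here:

    # target side: make path 1 inaccessible, leave path 2 usable
    $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n non_optimized

    # initiator side: wait until the kernel reports the new ANA states
    wait_ana() {    # illustrative polling loop, not the verbatim helper
        local path=$1 want=$2
        until [[ "$(cat /sys/block/$path/ana_state)" == "$want" ]]; do sleep 1; done
    }
    wait_ana nvme0c0n1 inaccessible
    wait_ana nvme0c1n1 non-optimized

Note the spelling difference: the RPC flag takes non_optimized while sysfs reports non-optimized, and the trace above shows the test using each form in the right place.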
00:20:55.934 12:17:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:55.934 12:17:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:55.934 12:17:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67403' 00:20:55.934 12:17:49 -- common/autotest_common.sh@955 -- # kill 67403 00:20:55.934 12:17:49 -- common/autotest_common.sh@960 -- # wait 67403 00:20:56.193 12:17:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:56.193 12:17:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:56.193 12:17:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:56.193 12:17:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:56.193 12:17:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:56.193 12:17:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.193 12:17:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.193 12:17:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.193 12:17:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:56.193 00:20:56.193 real 0m19.460s 00:20:56.193 user 1m13.181s 00:20:56.193 sys 0m9.718s 00:20:56.193 12:17:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:56.193 12:17:49 -- common/autotest_common.sh@10 -- # set +x 00:20:56.193 ************************************ 00:20:56.193 END TEST nvmf_multipath 00:20:56.193 ************************************ 00:20:56.193 12:17:49 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:20:56.193 12:17:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:56.193 12:17:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:56.193 12:17:49 -- common/autotest_common.sh@10 -- # set +x 00:20:56.452 ************************************ 00:20:56.452 START TEST nvmf_zcopy 00:20:56.452 ************************************ 00:20:56.452 12:17:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:20:56.452 * Looking for test storage... 
00:20:56.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:56.452 12:17:49 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:56.452 12:17:49 -- nvmf/common.sh@7 -- # uname -s 00:20:56.452 12:17:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.452 12:17:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.452 12:17:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.452 12:17:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.452 12:17:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.452 12:17:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.452 12:17:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.452 12:17:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.452 12:17:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.452 12:17:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.452 12:17:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:20:56.452 12:17:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:20:56.452 12:17:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.452 12:17:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.452 12:17:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:56.452 12:17:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.452 12:17:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:56.452 12:17:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.452 12:17:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.452 12:17:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.452 12:17:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.452 12:17:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.452 12:17:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.452 12:17:49 -- paths/export.sh@5 -- # export PATH 00:20:56.452 12:17:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.452 12:17:49 -- nvmf/common.sh@47 -- # : 0 00:20:56.452 12:17:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:56.452 12:17:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:56.452 12:17:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.452 12:17:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.452 12:17:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.452 12:17:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:56.452 12:17:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:56.452 12:17:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:56.452 12:17:49 -- target/zcopy.sh@12 -- # nvmftestinit 00:20:56.452 12:17:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:56.452 12:17:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.452 12:17:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:56.452 12:17:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:56.452 12:17:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:56.452 12:17:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.452 12:17:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.452 12:17:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.452 12:17:49 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:56.452 12:17:49 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:56.452 12:17:49 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:56.452 12:17:49 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:56.452 12:17:49 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:56.452 12:17:49 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:56.452 12:17:49 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.452 12:17:49 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.452 12:17:49 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:56.452 12:17:49 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:56.452 12:17:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:56.452 12:17:49 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:56.452 12:17:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:56.452 12:17:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:20:56.452 12:17:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:56.452 12:17:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:56.452 12:17:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:56.452 12:17:49 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:56.452 12:17:49 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:56.452 12:17:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:56.452 Cannot find device "nvmf_tgt_br" 00:20:56.452 12:17:49 -- nvmf/common.sh@155 -- # true 00:20:56.452 12:17:49 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:56.452 Cannot find device "nvmf_tgt_br2" 00:20:56.452 12:17:49 -- nvmf/common.sh@156 -- # true 00:20:56.452 12:17:49 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:56.452 12:17:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:56.452 Cannot find device "nvmf_tgt_br" 00:20:56.452 12:17:49 -- nvmf/common.sh@158 -- # true 00:20:56.452 12:17:49 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:56.452 Cannot find device "nvmf_tgt_br2" 00:20:56.452 12:17:49 -- nvmf/common.sh@159 -- # true 00:20:56.452 12:17:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:56.710 12:17:49 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:56.710 12:17:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:56.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:56.710 12:17:49 -- nvmf/common.sh@162 -- # true 00:20:56.710 12:17:49 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:56.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:56.710 12:17:49 -- nvmf/common.sh@163 -- # true 00:20:56.710 12:17:49 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:56.710 12:17:49 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:56.710 12:17:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:56.710 12:17:49 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:56.710 12:17:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:56.710 12:17:50 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:56.710 12:17:50 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:56.710 12:17:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:56.710 12:17:50 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:56.710 12:17:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:56.710 12:17:50 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:56.710 12:17:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:56.710 12:17:50 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:56.710 12:17:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:56.710 12:17:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:56.710 12:17:50 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:56.710 12:17:50 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:56.710 12:17:50 -- nvmf/common.sh@193 -- # ip 
link set nvmf_br up 00:20:56.710 12:17:50 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:56.710 12:17:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:56.710 12:17:50 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:56.710 12:17:50 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:56.710 12:17:50 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:56.710 12:17:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:56.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:56.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:20:56.710 00:20:56.710 --- 10.0.0.2 ping statistics --- 00:20:56.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.710 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:20:56.710 12:17:50 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:56.710 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:56.710 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:20:56.710 00:20:56.710 --- 10.0.0.3 ping statistics --- 00:20:56.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.710 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:20:56.710 12:17:50 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:56.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:56.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:20:56.710 00:20:56.710 --- 10.0.0.1 ping statistics --- 00:20:56.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.710 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:20:56.710 12:17:50 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.710 12:17:50 -- nvmf/common.sh@422 -- # return 0 00:20:56.710 12:17:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:56.711 12:17:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.711 12:17:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:56.711 12:17:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:56.711 12:17:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.711 12:17:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:56.711 12:17:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:56.711 12:17:50 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:20:56.711 12:17:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:56.711 12:17:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:56.711 12:17:50 -- common/autotest_common.sh@10 -- # set +x 00:20:56.711 12:17:50 -- nvmf/common.sh@470 -- # nvmfpid=67883 00:20:56.711 12:17:50 -- nvmf/common.sh@471 -- # waitforlisten 67883 00:20:56.711 12:17:50 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:56.711 12:17:50 -- common/autotest_common.sh@817 -- # '[' -z 67883 ']' 00:20:56.711 12:17:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.711 12:17:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:56.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.969 12:17:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
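As in the multipath run, nvmf/common.sh rebuilds the same veth topology from scratch (hence the repeated "Cannot find device" cleanup messages) and then prefixes the target binary with the namespace wrapper stored in NVMF_TARGET_NS_CMD, so the effective launch for this test is:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2

The only notable difference from the multipath target is the core mask: 0x2 here instead of the 0xF used earlier.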
00:20:56.969 12:17:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:56.969 12:17:50 -- common/autotest_common.sh@10 -- # set +x 00:20:56.969 [2024-04-26 12:17:50.233742] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:20:56.969 [2024-04-26 12:17:50.233865] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.969 [2024-04-26 12:17:50.380506] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.228 [2024-04-26 12:17:50.512928] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.228 [2024-04-26 12:17:50.512998] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.228 [2024-04-26 12:17:50.513012] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.228 [2024-04-26 12:17:50.513023] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.228 [2024-04-26 12:17:50.513033] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:57.228 [2024-04-26 12:17:50.513076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.795 12:17:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:57.795 12:17:51 -- common/autotest_common.sh@850 -- # return 0 00:20:57.795 12:17:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:57.795 12:17:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:57.795 12:17:51 -- common/autotest_common.sh@10 -- # set +x 00:20:57.795 12:17:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.795 12:17:51 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:20:57.795 12:17:51 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:20:57.795 12:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.795 12:17:51 -- common/autotest_common.sh@10 -- # set +x 00:20:57.795 [2024-04-26 12:17:51.184313] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.795 12:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.795 12:17:51 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:57.795 12:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.795 12:17:51 -- common/autotest_common.sh@10 -- # set +x 00:20:57.795 12:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.795 12:17:51 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:57.795 12:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.795 12:17:51 -- common/autotest_common.sh@10 -- # set +x 00:20:57.795 [2024-04-26 12:17:51.200443] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.795 12:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.795 12:17:51 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:57.795 12:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.795 12:17:51 -- common/autotest_common.sh@10 -- # set +x 00:20:57.795 12:17:51 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:20:57.795 12:17:51 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:20:57.795 12:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.795 12:17:51 -- common/autotest_common.sh@10 -- # set +x 00:20:57.795 malloc0 00:20:57.795 12:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.795 12:17:51 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:57.795 12:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.795 12:17:51 -- common/autotest_common.sh@10 -- # set +x 00:20:57.795 12:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.795 12:17:51 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:20:57.795 12:17:51 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:20:57.795 12:17:51 -- nvmf/common.sh@521 -- # config=() 00:20:57.795 12:17:51 -- nvmf/common.sh@521 -- # local subsystem config 00:20:57.795 12:17:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:57.795 12:17:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:57.795 { 00:20:57.795 "params": { 00:20:57.795 "name": "Nvme$subsystem", 00:20:57.795 "trtype": "$TEST_TRANSPORT", 00:20:57.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.795 "adrfam": "ipv4", 00:20:57.795 "trsvcid": "$NVMF_PORT", 00:20:57.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.795 "hdgst": ${hdgst:-false}, 00:20:57.795 "ddgst": ${ddgst:-false} 00:20:57.795 }, 00:20:57.795 "method": "bdev_nvme_attach_controller" 00:20:57.795 } 00:20:57.795 EOF 00:20:57.795 )") 00:20:57.795 12:17:51 -- nvmf/common.sh@543 -- # cat 00:20:57.795 12:17:51 -- nvmf/common.sh@545 -- # jq . 00:20:57.795 12:17:51 -- nvmf/common.sh@546 -- # IFS=, 00:20:57.795 12:17:51 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:57.795 "params": { 00:20:57.795 "name": "Nvme1", 00:20:57.795 "trtype": "tcp", 00:20:57.795 "traddr": "10.0.0.2", 00:20:57.795 "adrfam": "ipv4", 00:20:57.795 "trsvcid": "4420", 00:20:57.795 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.795 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:57.795 "hdgst": false, 00:20:57.795 "ddgst": false 00:20:57.795 }, 00:20:57.795 "method": "bdev_nvme_attach_controller" 00:20:57.795 }' 00:20:58.054 [2024-04-26 12:17:51.293291] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:20:58.054 [2024-04-26 12:17:51.293385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67916 ] 00:20:58.054 [2024-04-26 12:17:51.435205] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.313 [2024-04-26 12:17:51.564657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.313 Running I/O for 10 seconds... 
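Unlike the multipath test, the zcopy test never uses the kernel initiator: it hands bdevperf a generated JSON config over a file descriptor (--json /dev/fd/62) that attaches an NVMe-oF controller to the listener it just created. Written out as an ordinary file, the attach fragment printed above would be used roughly like this (the outer "subsystems"/"bdev" wrapper added by gen_nvmf_target_json is not shown in this excerpt, so its exact shape here is an assumption):

    cat > /tmp/nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nvme1.json -t 10 -q 128 -w verify -o 8192

The run that follows is the 10-second verify workload whose results are printed next.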
00:21:08.306 00:21:08.306 Latency(us) 00:21:08.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.306 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:21:08.306 Verification LBA range: start 0x0 length 0x1000 00:21:08.306 Nvme1n1 : 10.01 5759.51 45.00 0.00 0.00 22153.54 394.71 34793.66 00:21:08.306 =================================================================================================================== 00:21:08.306 Total : 5759.51 45.00 0.00 0.00 22153.54 394.71 34793.66 00:21:08.564 12:18:02 -- target/zcopy.sh@39 -- # perfpid=68028 00:21:08.564 12:18:02 -- target/zcopy.sh@41 -- # xtrace_disable 00:21:08.564 12:18:02 -- common/autotest_common.sh@10 -- # set +x 00:21:08.564 12:18:02 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:21:08.564 12:18:02 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:21:08.822 12:18:02 -- nvmf/common.sh@521 -- # config=() 00:21:08.822 12:18:02 -- nvmf/common.sh@521 -- # local subsystem config 00:21:08.822 12:18:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:08.822 12:18:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:08.822 { 00:21:08.822 "params": { 00:21:08.822 "name": "Nvme$subsystem", 00:21:08.822 "trtype": "$TEST_TRANSPORT", 00:21:08.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.822 "adrfam": "ipv4", 00:21:08.822 "trsvcid": "$NVMF_PORT", 00:21:08.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.822 "hdgst": ${hdgst:-false}, 00:21:08.822 "ddgst": ${ddgst:-false} 00:21:08.822 }, 00:21:08.822 "method": "bdev_nvme_attach_controller" 00:21:08.822 } 00:21:08.822 EOF 00:21:08.822 )") 00:21:08.822 12:18:02 -- nvmf/common.sh@543 -- # cat 00:21:08.822 [2024-04-26 12:18:02.037391] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.822 [2024-04-26 12:18:02.037567] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.822 12:18:02 -- nvmf/common.sh@545 -- # jq . 
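A second bdevperf instance (-t 5 -q 128 -w randrw -M 50 -o 8192) is set up the same way, and while it runs the log fills with repeated "Requested NSID 1 already in use" / "Unable to add namespace" errors. Those come from nvmf_subsystem_add_ns being called for a namespace ID that is already attached while I/O is in flight; each attempt briefly pauses the namespace (the nvmf_rpc_ns_paused callback in the error), which is presumably the point of this phase, so the errors are expected output rather than failures. The exact loop lives in test/nvmf/target/zcopy.sh and is not shown in this excerpt; an illustrative loop of roughly this shape would produce the same pattern:

    # illustrative only -- not the verbatim zcopy.sh loop
    while kill -0 "$perfpid" 2>/dev/null; do
        # NSID 1 is still attached, so every call fails with
        # "Requested NSID 1 already in use" while the randrw run continues
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done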
00:21:08.822 12:18:02 -- nvmf/common.sh@546 -- # IFS=, 00:21:08.822 12:18:02 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:08.822 "params": { 00:21:08.822 "name": "Nvme1", 00:21:08.822 "trtype": "tcp", 00:21:08.822 "traddr": "10.0.0.2", 00:21:08.822 "adrfam": "ipv4", 00:21:08.822 "trsvcid": "4420", 00:21:08.822 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.822 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:08.822 "hdgst": false, 00:21:08.822 "ddgst": false 00:21:08.822 }, 00:21:08.822 "method": "bdev_nvme_attach_controller" 00:21:08.822 }' 00:21:08.822 [2024-04-26 12:18:02.049355] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.822 [2024-04-26 12:18:02.049387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.822 [2024-04-26 12:18:02.061353] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.822 [2024-04-26 12:18:02.061382] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.822 [2024-04-26 12:18:02.073353] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 12:18:02.073381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.823 [2024-04-26 12:18:02.085356] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 12:18:02.085385] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.823 [2024-04-26 12:18:02.089464] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:21:08.823 [2024-04-26 12:18:02.089544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68028 ] 00:21:08.823 [2024-04-26 12:18:02.097357] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 12:18:02.097384] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.823 [2024-04-26 12:18:02.109380] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 12:18:02.109408] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.823 [2024-04-26 12:18:02.121365] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 12:18:02.121389] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.823 [2024-04-26 12:18:02.133383] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 12:18:02.133407] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.823 [2024-04-26 12:18:02.149385] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 12:18:02.149408] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.823 [2024-04-26 12:18:02.161402] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 12:18:02.161427] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.823 [2024-04-26 12:18:02.173432] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 
12:18:02.173596] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.823 [2024-04-26 12:18:02.185400] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 12:18:02.185532] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.823 [2024-04-26 12:18:02.193421] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 12:18:02.193554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.823 [2024-04-26 12:18:02.205409] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 12:18:02.205535] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.823 [2024-04-26 12:18:02.217414] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 12:18:02.217542] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.823 [2024-04-26 12:18:02.229418] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 12:18:02.229546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.823 [2024-04-26 12:18:02.234290] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.823 [2024-04-26 12:18:02.241434] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 12:18:02.241466] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.823 [2024-04-26 12:18:02.253433] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 12:18:02.253465] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.823 [2024-04-26 12:18:02.265432] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 12:18:02.265460] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.823 [2024-04-26 12:18:02.277444] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 12:18:02.277480] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:08.823 [2024-04-26 12:18:02.289438] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:08.823 [2024-04-26 12:18:02.289468] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.301429] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.301456] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.313451] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.313484] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.325438] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.325464] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.337444] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.337470] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.349450] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.349478] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.361475] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.361530] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.365545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.082 [2024-04-26 12:18:02.373454] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.373490] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.385486] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.385520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.397482] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.397519] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.409503] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.409546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.421500] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.421541] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.433494] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.433531] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.445516] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.445572] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.457502] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.457553] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.469492] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.469520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.481517] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.481551] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.493545] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.493589] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.505533] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.505567] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.517535] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.517567] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.529551] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.529582] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.082 [2024-04-26 12:18:02.541606] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.082 [2024-04-26 12:18:02.541641] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.340 Running I/O for 5 seconds... 00:21:09.340 [2024-04-26 12:18:02.553573] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.340 [2024-04-26 12:18:02.553603] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.340 [2024-04-26 12:18:02.572178] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.340 [2024-04-26 12:18:02.572259] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.340 [2024-04-26 12:18:02.588011] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.340 [2024-04-26 12:18:02.588050] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.340 [2024-04-26 12:18:02.598161] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.340 [2024-04-26 12:18:02.598232] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.340 [2024-04-26 12:18:02.613013] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.340 [2024-04-26 12:18:02.613049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.340 [2024-04-26 12:18:02.628624] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.340 [2024-04-26 12:18:02.628660] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.340 [2024-04-26 12:18:02.637843] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.340 [2024-04-26 12:18:02.637891] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.340 [2024-04-26 12:18:02.654725] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.340 [2024-04-26 12:18:02.654762] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.340 [2024-04-26 12:18:02.673191] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.340 [2024-04-26 12:18:02.673269] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.340 [2024-04-26 12:18:02.688409] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.341 [2024-04-26 12:18:02.688446] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.341 [2024-04-26 12:18:02.697925] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.341 [2024-04-26 12:18:02.697961] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.341 [2024-04-26 12:18:02.714573] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.341 [2024-04-26 12:18:02.714608] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.341 [2024-04-26 12:18:02.730440] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.341 [2024-04-26 12:18:02.730478] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.341 [2024-04-26 12:18:02.748545] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.341 [2024-04-26 12:18:02.748587] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.341 [2024-04-26 12:18:02.764850] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.341 [2024-04-26 12:18:02.764889] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.341 [2024-04-26 12:18:02.781440] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.341 [2024-04-26 12:18:02.781477] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.341 [2024-04-26 12:18:02.797522] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.341 [2024-04-26 12:18:02.797559] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.600 [2024-04-26 12:18:02.815691] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.600 [2024-04-26 12:18:02.815752] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.600 [2024-04-26 12:18:02.830497] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.600 [2024-04-26 12:18:02.830543] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.600 [2024-04-26 12:18:02.846574] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.600 [2024-04-26 12:18:02.846610] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.600 [2024-04-26 12:18:02.862936] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.600 [2024-04-26 12:18:02.862988] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.600 [2024-04-26 12:18:02.881425] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.600 [2024-04-26 12:18:02.881463] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.600 [2024-04-26 12:18:02.896230] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.600 [2024-04-26 12:18:02.896279] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.600 [2024-04-26 12:18:02.911868] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.600 [2024-04-26 12:18:02.911910] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.600 [2024-04-26 12:18:02.930462] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.600 [2024-04-26 12:18:02.930496] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.600 [2024-04-26 12:18:02.945481] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.600 [2024-04-26 12:18:02.945516] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.600 [2024-04-26 12:18:02.961386] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.600 [2024-04-26 12:18:02.961422] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.600 [2024-04-26 12:18:02.979813] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.600 [2024-04-26 12:18:02.979847] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.600 [2024-04-26 12:18:02.994983] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.600 [2024-04-26 12:18:02.995016] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.600 [2024-04-26 12:18:03.013220] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.600 [2024-04-26 12:18:03.013285] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.600 [2024-04-26 12:18:03.029183] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.600 [2024-04-26 12:18:03.029247] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.600 [2024-04-26 12:18:03.039278] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.600 [2024-04-26 12:18:03.039314] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.600 [2024-04-26 12:18:03.054383] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.600 [2024-04-26 12:18:03.054419] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.600 [2024-04-26 12:18:03.065583] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.600 [2024-04-26 12:18:03.065622] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.858 [2024-04-26 12:18:03.080798] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.858 [2024-04-26 12:18:03.080834] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.858 [2024-04-26 12:18:03.096476] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.858 [2024-04-26 12:18:03.096512] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.858 [2024-04-26 12:18:03.114993] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.858 [2024-04-26 12:18:03.115028] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.858 [2024-04-26 12:18:03.130256] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.858 [2024-04-26 12:18:03.130321] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.858 [2024-04-26 12:18:03.148633] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.858 [2024-04-26 12:18:03.148666] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.859 [2024-04-26 12:18:03.163847] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.859 [2024-04-26 12:18:03.163881] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.859 [2024-04-26 12:18:03.174145] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.859 [2024-04-26 12:18:03.174209] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.859 [2024-04-26 12:18:03.188744] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.859 [2024-04-26 12:18:03.188777] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.859 [2024-04-26 12:18:03.203743] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.859 [2024-04-26 12:18:03.203781] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.859 [2024-04-26 12:18:03.218913] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.859 [2024-04-26 12:18:03.218946] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.859 [2024-04-26 12:18:03.227833] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.859 [2024-04-26 12:18:03.227865] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.859 [2024-04-26 12:18:03.244017] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.859 [2024-04-26 12:18:03.244054] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.859 [2024-04-26 12:18:03.254357] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.859 [2024-04-26 12:18:03.254393] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.859 [2024-04-26 12:18:03.266410] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.859 [2024-04-26 12:18:03.266445] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.859 [2024-04-26 12:18:03.281970] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.859 [2024-04-26 12:18:03.282005] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.859 [2024-04-26 12:18:03.298333] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.859 [2024-04-26 12:18:03.298369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:09.859 [2024-04-26 12:18:03.316486] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:09.859 [2024-04-26 12:18:03.316523] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.125 [2024-04-26 12:18:03.331209] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.125 [2024-04-26 12:18:03.331272] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.125 [2024-04-26 12:18:03.349034] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.125 [2024-04-26 12:18:03.349079] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.125 [2024-04-26 12:18:03.364750] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.125 [2024-04-26 12:18:03.364784] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.125 [2024-04-26 12:18:03.382193] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.125 [2024-04-26 12:18:03.382256] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.125 [2024-04-26 12:18:03.398470] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.125 [2024-04-26 12:18:03.398514] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.125 [2024-04-26 12:18:03.416856] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.125 [2024-04-26 12:18:03.416906] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.125 [2024-04-26 12:18:03.432098] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.125 [2024-04-26 12:18:03.432151] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.125 [2024-04-26 12:18:03.442266] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.125 [2024-04-26 12:18:03.442331] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.125 [2024-04-26 12:18:03.458216] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.125 [2024-04-26 12:18:03.458300] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.125 [2024-04-26 12:18:03.475715] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.125 [2024-04-26 12:18:03.475768] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.125 [2024-04-26 12:18:03.493733] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.125 [2024-04-26 12:18:03.493771] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.125 [2024-04-26 12:18:03.508366] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.125 [2024-04-26 12:18:03.508401] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.125 [2024-04-26 12:18:03.524373] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.125 [2024-04-26 12:18:03.524409] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.125 [2024-04-26 12:18:03.541767] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.125 [2024-04-26 12:18:03.541842] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.125 [2024-04-26 12:18:03.558006] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.125 [2024-04-26 12:18:03.558053] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.125 [2024-04-26 12:18:03.574809] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.125 [2024-04-26 12:18:03.574847] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.590036] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.590074] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.599795] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.599831] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.616246] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.616282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.632963] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.633002] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.649556] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.649605] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.666655] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.666695] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.682961] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.683021] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.700739] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.700783] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.715413] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.715463] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.731173] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.731230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.747961] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.748010] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.763476] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.763512] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.772665] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.772700] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.788832] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.788869] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.805157] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.805208] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.821443] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.821479] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.839909] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.839950] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.854803] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.854837] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.402 [2024-04-26 12:18:03.864811] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.402 [2024-04-26 12:18:03.864847] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.666 [2024-04-26 12:18:03.881845] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.666 [2024-04-26 12:18:03.881883] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.666 [2024-04-26 12:18:03.896996] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.666 [2024-04-26 12:18:03.897029] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.666 [2024-04-26 12:18:03.912916] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.666 [2024-04-26 12:18:03.912951] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.666 [2024-04-26 12:18:03.929123] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.666 [2024-04-26 12:18:03.929215] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.666 [2024-04-26 12:18:03.946024] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.666 [2024-04-26 12:18:03.946076] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.666 [2024-04-26 12:18:03.962832] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.666 [2024-04-26 12:18:03.962894] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.666 [2024-04-26 12:18:03.979095] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.666 [2024-04-26 12:18:03.979164] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.666 [2024-04-26 12:18:03.997375] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.666 [2024-04-26 12:18:03.997417] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.666 [2024-04-26 12:18:04.013388] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.666 [2024-04-26 12:18:04.013424] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.666 [2024-04-26 12:18:04.031814] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.666 [2024-04-26 12:18:04.031854] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.666 [2024-04-26 12:18:04.046466] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.667 [2024-04-26 12:18:04.046503] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.667 [2024-04-26 12:18:04.062538] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.667 [2024-04-26 12:18:04.062572] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.667 [2024-04-26 12:18:04.079806] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.667 [2024-04-26 12:18:04.079853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.667 [2024-04-26 12:18:04.095799] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.667 [2024-04-26 12:18:04.095851] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.667 [2024-04-26 12:18:04.112454] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.667 [2024-04-26 12:18:04.112498] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.667 [2024-04-26 12:18:04.130727] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.667 [2024-04-26 12:18:04.130765] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.925 [2024-04-26 12:18:04.144167] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.925 [2024-04-26 12:18:04.144241] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.925 [2024-04-26 12:18:04.161732] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.925 [2024-04-26 12:18:04.161773] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.925 [2024-04-26 12:18:04.175065] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.925 [2024-04-26 12:18:04.175105] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.925 [2024-04-26 12:18:04.192350] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.925 [2024-04-26 12:18:04.192394] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.925 [2024-04-26 12:18:04.207385] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.925 [2024-04-26 12:18:04.207432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.925 [2024-04-26 12:18:04.216961] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.925 [2024-04-26 12:18:04.217011] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.925 [2024-04-26 12:18:04.232736] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.925 [2024-04-26 12:18:04.232774] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.925 [2024-04-26 12:18:04.249483] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.925 [2024-04-26 12:18:04.249522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.925 [2024-04-26 12:18:04.265249] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.925 [2024-04-26 12:18:04.265287] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.925 [2024-04-26 12:18:04.274888] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.925 [2024-04-26 12:18:04.274935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.925 [2024-04-26 12:18:04.291159] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.925 [2024-04-26 12:18:04.291244] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.925 [2024-04-26 12:18:04.307771] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.925 [2024-04-26 12:18:04.307809] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.925 [2024-04-26 12:18:04.324500] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.925 [2024-04-26 12:18:04.324536] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.925 [2024-04-26 12:18:04.340794] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.925 [2024-04-26 12:18:04.340831] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.925 [2024-04-26 12:18:04.358250] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.925 [2024-04-26 12:18:04.358298] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.925 [2024-04-26 12:18:04.374108] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.925 [2024-04-26 12:18:04.374151] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:10.925 [2024-04-26 12:18:04.383742] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:10.925 [2024-04-26 12:18:04.383778] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.183 [2024-04-26 12:18:04.399406] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.183 [2024-04-26 12:18:04.399441] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.183 [2024-04-26 12:18:04.416449] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.183 [2024-04-26 12:18:04.416485] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.183 [2024-04-26 12:18:04.434386] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.183 [2024-04-26 12:18:04.434422] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.183 [2024-04-26 12:18:04.449458] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.183 [2024-04-26 12:18:04.449494] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.183 [2024-04-26 12:18:04.459164] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.183 [2024-04-26 12:18:04.459213] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.183 [2024-04-26 12:18:04.475898] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.183 [2024-04-26 12:18:04.475936] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.183 [2024-04-26 12:18:04.492527] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.183 [2024-04-26 12:18:04.492575] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.183 [2024-04-26 12:18:04.508255] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.183 [2024-04-26 12:18:04.508298] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.183 [2024-04-26 12:18:04.517750] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.183 [2024-04-26 12:18:04.517786] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.183 [2024-04-26 12:18:04.533849] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.183 [2024-04-26 12:18:04.533884] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.183 [2024-04-26 12:18:04.551278] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.183 [2024-04-26 12:18:04.551324] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.183 [2024-04-26 12:18:04.566524] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.183 [2024-04-26 12:18:04.566605] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.183 [2024-04-26 12:18:04.576409] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.183 [2024-04-26 12:18:04.576443] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.183 [2024-04-26 12:18:04.591762] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.183 [2024-04-26 12:18:04.591797] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.183 [2024-04-26 12:18:04.608832] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.183 [2024-04-26 12:18:04.608874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.183 [2024-04-26 12:18:04.625387] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.183 [2024-04-26 12:18:04.625421] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.183 [2024-04-26 12:18:04.641611] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.183 [2024-04-26 12:18:04.641644] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.442 [2024-04-26 12:18:04.657748] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.442 [2024-04-26 12:18:04.657781] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.442 [2024-04-26 12:18:04.677026] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.442 [2024-04-26 12:18:04.677081] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.442 [2024-04-26 12:18:04.692061] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.442 [2024-04-26 12:18:04.692108] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.442 [2024-04-26 12:18:04.708567] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.442 [2024-04-26 12:18:04.708602] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.442 [2024-04-26 12:18:04.725591] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.442 [2024-04-26 12:18:04.725627] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.442 [2024-04-26 12:18:04.741592] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.442 [2024-04-26 12:18:04.741634] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.442 [2024-04-26 12:18:04.758919] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.442 [2024-04-26 12:18:04.758953] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.442 [2024-04-26 12:18:04.775612] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.442 [2024-04-26 12:18:04.775651] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.442 [2024-04-26 12:18:04.792151] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.442 [2024-04-26 12:18:04.792230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.442 [2024-04-26 12:18:04.809862] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.442 [2024-04-26 12:18:04.809895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.442 [2024-04-26 12:18:04.825564] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.442 [2024-04-26 12:18:04.825598] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.442 [2024-04-26 12:18:04.844448] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.442 [2024-04-26 12:18:04.844484] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.442 [2024-04-26 12:18:04.859351] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.442 [2024-04-26 12:18:04.859386] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.442 [2024-04-26 12:18:04.875546] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.442 [2024-04-26 12:18:04.875584] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.442 [2024-04-26 12:18:04.893344] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.442 [2024-04-26 12:18:04.893380] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.442 [2024-04-26 12:18:04.908236] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.442 [2024-04-26 12:18:04.908291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.700 [2024-04-26 12:18:04.924439] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.700 [2024-04-26 12:18:04.924490] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.700 [2024-04-26 12:18:04.942954] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.700 [2024-04-26 12:18:04.942991] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.700 [2024-04-26 12:18:04.957101] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.700 [2024-04-26 12:18:04.957135] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.700 [2024-04-26 12:18:04.973387] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.700 [2024-04-26 12:18:04.973422] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.700 [2024-04-26 12:18:04.990670] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.700 [2024-04-26 12:18:04.990705] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.700 [2024-04-26 12:18:05.006962] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.700 [2024-04-26 12:18:05.006998] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.700 [2024-04-26 12:18:05.024056] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.700 [2024-04-26 12:18:05.024102] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.700 [2024-04-26 12:18:05.040506] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.700 [2024-04-26 12:18:05.040559] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.701 [2024-04-26 12:18:05.057142] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.701 [2024-04-26 12:18:05.057241] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.701 [2024-04-26 12:18:05.073321] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.701 [2024-04-26 12:18:05.073357] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.701 [2024-04-26 12:18:05.090271] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.701 [2024-04-26 12:18:05.090339] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.701 [2024-04-26 12:18:05.106406] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.701 [2024-04-26 12:18:05.106448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.701 [2024-04-26 12:18:05.124494] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.701 [2024-04-26 12:18:05.124535] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.701 [2024-04-26 12:18:05.138038] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.701 [2024-04-26 12:18:05.138089] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.701 [2024-04-26 12:18:05.154040] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.701 [2024-04-26 12:18:05.154074] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.959 [2024-04-26 12:18:05.170936] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.959 [2024-04-26 12:18:05.170971] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.959 [2024-04-26 12:18:05.187980] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.959 [2024-04-26 12:18:05.188015] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.959 [2024-04-26 12:18:05.204467] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.959 [2024-04-26 12:18:05.204512] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.959 [2024-04-26 12:18:05.221631] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.959 [2024-04-26 12:18:05.221667] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.959 [2024-04-26 12:18:05.237715] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.959 [2024-04-26 12:18:05.237753] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.959 [2024-04-26 12:18:05.255892] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.959 [2024-04-26 12:18:05.255929] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.959 [2024-04-26 12:18:05.270827] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.959 [2024-04-26 12:18:05.270861] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.959 [2024-04-26 12:18:05.280527] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.959 [2024-04-26 12:18:05.280563] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.959 [2024-04-26 12:18:05.295527] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.959 [2024-04-26 12:18:05.295581] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.959 [2024-04-26 12:18:05.310664] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.959 [2024-04-26 12:18:05.310712] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.959 [2024-04-26 12:18:05.320397] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.959 [2024-04-26 12:18:05.320431] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.959 [2024-04-26 12:18:05.337898] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.959 [2024-04-26 12:18:05.337932] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.959 [2024-04-26 12:18:05.354377] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.959 [2024-04-26 12:18:05.354411] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.959 [2024-04-26 12:18:05.371769] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.959 [2024-04-26 12:18:05.371818] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.959 [2024-04-26 12:18:05.387045] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.959 [2024-04-26 12:18:05.387098] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.959 [2024-04-26 12:18:05.403616] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.959 [2024-04-26 12:18:05.403652] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:11.959 [2024-04-26 12:18:05.422362] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:11.959 [2024-04-26 12:18:05.422397] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.218 [2024-04-26 12:18:05.437805] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.218 [2024-04-26 12:18:05.437874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.218 [2024-04-26 12:18:05.454856] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.218 [2024-04-26 12:18:05.454910] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.218 [2024-04-26 12:18:05.472292] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.218 [2024-04-26 12:18:05.472333] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.218 [2024-04-26 12:18:05.487636] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.218 [2024-04-26 12:18:05.487885] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.218 [2024-04-26 12:18:05.504156] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.218 [2024-04-26 12:18:05.504444] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.218 [2024-04-26 12:18:05.520665] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.218 [2024-04-26 12:18:05.520880] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.218 [2024-04-26 12:18:05.537405] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.218 [2024-04-26 12:18:05.537553] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.218 [2024-04-26 12:18:05.553354] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.218 [2024-04-26 12:18:05.553525] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.218 [2024-04-26 12:18:05.563016] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.218 [2024-04-26 12:18:05.563050] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.218 [2024-04-26 12:18:05.579025] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.218 [2024-04-26 12:18:05.579060] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.218 [2024-04-26 12:18:05.595230] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.218 [2024-04-26 12:18:05.595287] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.218 [2024-04-26 12:18:05.614162] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.218 [2024-04-26 12:18:05.614212] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.218 [2024-04-26 12:18:05.629360] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.218 [2024-04-26 12:18:05.629393] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.218 [2024-04-26 12:18:05.644937] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.218 [2024-04-26 12:18:05.644969] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.218 [2024-04-26 12:18:05.654285] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.218 [2024-04-26 12:18:05.654318] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.218 [2024-04-26 12:18:05.670297] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.218 [2024-04-26 12:18:05.670347] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.218 [2024-04-26 12:18:05.679861] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.218 [2024-04-26 12:18:05.679895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.477 [2024-04-26 12:18:05.695798] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.477 [2024-04-26 12:18:05.695833] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.477 [2024-04-26 12:18:05.712717] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.477 [2024-04-26 12:18:05.712756] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.477 [2024-04-26 12:18:05.729766] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.477 [2024-04-26 12:18:05.729803] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.477 [2024-04-26 12:18:05.745498] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.477 [2024-04-26 12:18:05.745531] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.477 [2024-04-26 12:18:05.762466] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.477 [2024-04-26 12:18:05.762503] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.477 [2024-04-26 12:18:05.778929] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.477 [2024-04-26 12:18:05.778964] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.477 [2024-04-26 12:18:05.795454] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.477 [2024-04-26 12:18:05.795489] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.477 [2024-04-26 12:18:05.811497] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.478 [2024-04-26 12:18:05.811534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.478 [2024-04-26 12:18:05.829810] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.478 [2024-04-26 12:18:05.829857] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.478 [2024-04-26 12:18:05.843583] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.478 [2024-04-26 12:18:05.843622] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.478 [2024-04-26 12:18:05.859558] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.478 [2024-04-26 12:18:05.859609] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.478 [2024-04-26 12:18:05.878098] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.478 [2024-04-26 12:18:05.878183] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.478 [2024-04-26 12:18:05.893634] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.478 [2024-04-26 12:18:05.893676] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.478 [2024-04-26 12:18:05.909900] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.478 [2024-04-26 12:18:05.909937] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.478 [2024-04-26 12:18:05.928127] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.478 [2024-04-26 12:18:05.928185] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.478 [2024-04-26 12:18:05.943086] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.478 [2024-04-26 12:18:05.943132] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.736 [2024-04-26 12:18:05.952685] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.736 [2024-04-26 12:18:05.952721] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.736 [2024-04-26 12:18:05.969364] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.736 [2024-04-26 12:18:05.969426] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.736 [2024-04-26 12:18:05.984482] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.736 [2024-04-26 12:18:05.984547] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.736 [2024-04-26 12:18:06.000895] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.736 [2024-04-26 12:18:06.000955] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.736 [2024-04-26 12:18:06.017686] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.736 [2024-04-26 12:18:06.017737] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.736 [2024-04-26 12:18:06.033857] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.736 [2024-04-26 12:18:06.033895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.736 [2024-04-26 12:18:06.053359] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.736 [2024-04-26 12:18:06.053412] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.736 [2024-04-26 12:18:06.068008] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.736 [2024-04-26 12:18:06.068059] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.736 [2024-04-26 12:18:06.084222] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.736 [2024-04-26 12:18:06.084287] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.736 [2024-04-26 12:18:06.102534] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.736 [2024-04-26 12:18:06.102571] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.736 [2024-04-26 12:18:06.117882] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.736 [2024-04-26 12:18:06.117918] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.736 [2024-04-26 12:18:06.135647] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.736 [2024-04-26 12:18:06.135717] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.736 [2024-04-26 12:18:06.150756] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.736 [2024-04-26 12:18:06.150823] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.736 [2024-04-26 12:18:06.166980] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.736 [2024-04-26 12:18:06.167019] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.736 [2024-04-26 12:18:06.184487] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.736 [2024-04-26 12:18:06.184523] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.736 [2024-04-26 12:18:06.199457] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.736 [2024-04-26 12:18:06.199495] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.040 [2024-04-26 12:18:06.209362] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.040 [2024-04-26 12:18:06.209397] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.040 [2024-04-26 12:18:06.224368] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.040 [2024-04-26 12:18:06.224403] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.040 [2024-04-26 12:18:06.241514] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.040 [2024-04-26 12:18:06.241560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.040 [2024-04-26 12:18:06.257704] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.040 [2024-04-26 12:18:06.257740] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.040 [2024-04-26 12:18:06.275474] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.040 [2024-04-26 12:18:06.275513] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.040 [2024-04-26 12:18:06.290315] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.040 [2024-04-26 12:18:06.290350] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.040 [2024-04-26 12:18:06.302387] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.040 [2024-04-26 12:18:06.302426] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.040 [2024-04-26 12:18:06.318470] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.040 [2024-04-26 12:18:06.318520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.040 [2024-04-26 12:18:06.334241] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.040 [2024-04-26 12:18:06.334293] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.040 [2024-04-26 12:18:06.343861] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.040 [2024-04-26 12:18:06.343900] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.040 [2024-04-26 12:18:06.360306] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.040 [2024-04-26 12:18:06.360343] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.040 [2024-04-26 12:18:06.377536] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.041 [2024-04-26 12:18:06.377575] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.041 [2024-04-26 12:18:06.395118] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.041 [2024-04-26 12:18:06.395154] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.041 [2024-04-26 12:18:06.409551] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.041 [2024-04-26 12:18:06.409587] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.041 [2024-04-26 12:18:06.418976] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.041 [2024-04-26 12:18:06.419012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.041 [2024-04-26 12:18:06.430966] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.041 [2024-04-26 12:18:06.431003] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.041 [2024-04-26 12:18:06.441859] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.041 [2024-04-26 12:18:06.441897] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.041 [2024-04-26 12:18:06.460185] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.041 [2024-04-26 12:18:06.460234] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.041 [2024-04-26 12:18:06.474658] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.041 [2024-04-26 12:18:06.474694] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.041 [2024-04-26 12:18:06.489802] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.041 [2024-04-26 12:18:06.489839] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.300 [2024-04-26 12:18:06.499558] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.300 [2024-04-26 12:18:06.499594] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.300 [2024-04-26 12:18:06.516102] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.300 [2024-04-26 12:18:06.516140] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.300 [2024-04-26 12:18:06.533460] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.300 [2024-04-26 12:18:06.533502] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.300 [2024-04-26 12:18:06.547937] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.300 [2024-04-26 12:18:06.547991] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.300 [2024-04-26 12:18:06.563835] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.300 [2024-04-26 12:18:06.563872] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.300 [2024-04-26 12:18:06.581384] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.300 [2024-04-26 12:18:06.581420] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.300 [2024-04-26 12:18:06.596660] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.300 [2024-04-26 12:18:06.596696] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.300 [2024-04-26 12:18:06.606029] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.300 [2024-04-26 12:18:06.606065] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.300 [2024-04-26 12:18:06.622234] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.300 [2024-04-26 12:18:06.622285] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.300 [2024-04-26 12:18:06.638447] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.300 [2024-04-26 12:18:06.638486] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.300 [2024-04-26 12:18:06.656298] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.300 [2024-04-26 12:18:06.656337] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.300 [2024-04-26 12:18:06.672575] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.300 [2024-04-26 12:18:06.672612] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.300 [2024-04-26 12:18:06.691080] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.300 [2024-04-26 12:18:06.691117] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.300 [2024-04-26 12:18:06.706399] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.300 [2024-04-26 12:18:06.706449] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.300 [2024-04-26 12:18:06.716242] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.300 [2024-04-26 12:18:06.716274] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.300 [2024-04-26 12:18:06.732330] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.300 [2024-04-26 12:18:06.732364] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.300 [2024-04-26 12:18:06.742630] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.300 [2024-04-26 12:18:06.742663] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.300 [2024-04-26 12:18:06.758435] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.300 [2024-04-26 12:18:06.758473] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.558 [2024-04-26 12:18:06.772929] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.558 [2024-04-26 12:18:06.772971] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.558 [2024-04-26 12:18:06.787972] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.558 [2024-04-26 12:18:06.788024] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.558 [2024-04-26 12:18:06.804938] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.558 [2024-04-26 12:18:06.804974] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.558 [2024-04-26 12:18:06.821762] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.558 [2024-04-26 12:18:06.821817] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.558 [2024-04-26 12:18:06.837938] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.558 [2024-04-26 12:18:06.837991] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.558 [2024-04-26 12:18:06.855890] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.558 [2024-04-26 12:18:06.855928] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.558 [2024-04-26 12:18:06.870439] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.558 [2024-04-26 12:18:06.870472] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.558 [2024-04-26 12:18:06.885100] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.558 [2024-04-26 12:18:06.885134] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.558 [2024-04-26 12:18:06.902274] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.558 [2024-04-26 12:18:06.902309] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.558 [2024-04-26 12:18:06.917936] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.558 [2024-04-26 12:18:06.917972] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.558 [2024-04-26 12:18:06.927150] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.558 [2024-04-26 12:18:06.927196] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.558 [2024-04-26 12:18:06.944009] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.558 [2024-04-26 12:18:06.944064] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.558 [2024-04-26 12:18:06.961584] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.558 [2024-04-26 12:18:06.961640] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.558 [2024-04-26 12:18:06.976571] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.558 [2024-04-26 12:18:06.976608] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.558 [2024-04-26 12:18:06.986260] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.558 [2024-04-26 12:18:06.986294] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.558 [2024-04-26 12:18:07.002785] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.558 [2024-04-26 12:18:07.002822] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.558 [2024-04-26 12:18:07.012700] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.558 [2024-04-26 12:18:07.012736] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.558 [2024-04-26 12:18:07.026007] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.558 [2024-04-26 12:18:07.026043] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.816 [2024-04-26 12:18:07.041011] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.816 [2024-04-26 12:18:07.041045] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.816 [2024-04-26 12:18:07.050885] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.816 [2024-04-26 12:18:07.050922] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.816 [2024-04-26 12:18:07.066656] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.816 [2024-04-26 12:18:07.066695] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.816 [2024-04-26 12:18:07.082727] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.816 [2024-04-26 12:18:07.082763] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.816 [2024-04-26 12:18:07.100797] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.816 [2024-04-26 12:18:07.100833] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.816 [2024-04-26 12:18:07.115952] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.816 [2024-04-26 12:18:07.115991] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.816 [2024-04-26 12:18:07.126189] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.816 [2024-04-26 12:18:07.126236] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.816 [2024-04-26 12:18:07.141526] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.816 [2024-04-26 12:18:07.141562] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.816 [2024-04-26 12:18:07.158311] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.816 [2024-04-26 12:18:07.158351] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.816 [2024-04-26 12:18:07.175278] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.816 [2024-04-26 12:18:07.175317] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.816 [2024-04-26 12:18:07.191740] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.816 [2024-04-26 12:18:07.191785] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.816 [2024-04-26 12:18:07.208274] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.817 [2024-04-26 12:18:07.208324] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.817 [2024-04-26 12:18:07.224703] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.817 [2024-04-26 12:18:07.224739] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.817 [2024-04-26 12:18:07.242800] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.817 [2024-04-26 12:18:07.242838] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.817 [2024-04-26 12:18:07.257442] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.817 [2024-04-26 12:18:07.257479] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.817 [2024-04-26 12:18:07.272765] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.817 [2024-04-26 12:18:07.272805] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.817 [2024-04-26 12:18:07.282683] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.817 [2024-04-26 12:18:07.282718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.075 [2024-04-26 12:18:07.297675] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.075 [2024-04-26 12:18:07.297722] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.075 [2024-04-26 12:18:07.313846] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.075 [2024-04-26 12:18:07.313898] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.075 [2024-04-26 12:18:07.331254] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.075 [2024-04-26 12:18:07.331287] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.075 [2024-04-26 12:18:07.345981] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.075 [2024-04-26 12:18:07.346150] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.075 [2024-04-26 12:18:07.362121] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.075 [2024-04-26 12:18:07.362315] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.075 [2024-04-26 12:18:07.380334] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.075 [2024-04-26 12:18:07.380489] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.075 [2024-04-26 12:18:07.391521] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.075 [2024-04-26 12:18:07.391668] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.075 [2024-04-26 12:18:07.404607] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.075 [2024-04-26 12:18:07.404857] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.075 [2024-04-26 12:18:07.422519] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.075 [2024-04-26 12:18:07.422554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.075 [2024-04-26 12:18:07.437317] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.075 [2024-04-26 12:18:07.437353] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.075 [2024-04-26 12:18:07.446636] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.075 [2024-04-26 12:18:07.446672] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.075 [2024-04-26 12:18:07.458279] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.075 [2024-04-26 12:18:07.458315] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.075 [2024-04-26 12:18:07.469128] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.075 [2024-04-26 12:18:07.469165] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.075 [2024-04-26 12:18:07.486271] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.075 [2024-04-26 12:18:07.486311] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.075 [2024-04-26 12:18:07.503155] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.075 [2024-04-26 12:18:07.503211] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.075 [2024-04-26 12:18:07.519259] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.075 [2024-04-26 12:18:07.519293] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.075 [2024-04-26 12:18:07.529060] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.075 [2024-04-26 12:18:07.529097] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.543674] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.543730] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.558210] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.558246] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 00:21:14.333 Latency(us) 00:21:14.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.333 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:21:14.333 Nvme1n1 : 5.01 11427.46 89.28 0.00 0.00 11187.11 4915.20 24784.52 00:21:14.333 =================================================================================================================== 00:21:14.333 Total : 11427.46 89.28 0.00 0.00 11187.11 4915.20 24784.52 00:21:14.333 [2024-04-26 12:18:07.568320] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.568475] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.576310] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.576453] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.584306] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.584443] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.592317] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.592483] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.604350] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.604591] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.616350] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.616390] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.628364] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.628405] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.640355] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.640396] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.652363] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.652405] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.664366] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.664405] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.676369] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.676414] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.688381] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.688426] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.700377] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.700417] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.712370] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.712406] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.724374] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.724406] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.736387] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.736432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.748395] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.748439] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.760373] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.760406] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.768381] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.768411] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.780379] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.780410] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.788376] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.788407] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.333 [2024-04-26 12:18:07.800438] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.333 [2024-04-26 12:18:07.800498] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.591 [2024-04-26 12:18:07.808394] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.591 [2024-04-26 12:18:07.808430] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.591 [2024-04-26 12:18:07.816393] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.591 [2024-04-26 12:18:07.816422] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.591 [2024-04-26 12:18:07.828392] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.591 [2024-04-26 12:18:07.828433] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.591 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (68028) - No such process 00:21:14.591 12:18:07 -- target/zcopy.sh@49 -- # wait 68028 00:21:14.591 12:18:07 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:14.591 12:18:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:14.591 12:18:07 -- common/autotest_common.sh@10 -- # set +x 00:21:14.591 12:18:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:14.591 12:18:07 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:21:14.591 12:18:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:14.591 12:18:07 -- common/autotest_common.sh@10 -- # set +x 00:21:14.591 delay0 00:21:14.591 12:18:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:14.591 12:18:07 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:21:14.591 12:18:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:14.591 12:18:07 -- common/autotest_common.sh@10 -- # set +x 00:21:14.591 12:18:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:14.591 12:18:07 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:21:14.591 [2024-04-26 12:18:08.014613] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:21:21.149 Initializing NVMe Controllers 00:21:21.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:21.149 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:21.149 Initialization complete. Launching workers. 00:21:21.149 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 96 00:21:21.149 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 383, failed to submit 33 00:21:21.149 success 259, unsuccess 124, failed 0 00:21:21.149 12:18:14 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:21:21.149 12:18:14 -- target/zcopy.sh@60 -- # nvmftestfini 00:21:21.149 12:18:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:21.149 12:18:14 -- nvmf/common.sh@117 -- # sync 00:21:21.149 12:18:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:21.149 12:18:14 -- nvmf/common.sh@120 -- # set +e 00:21:21.149 12:18:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:21.149 12:18:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:21.149 rmmod nvme_tcp 00:21:21.149 rmmod nvme_fabrics 00:21:21.149 rmmod nvme_keyring 00:21:21.149 12:18:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:21.149 12:18:14 -- nvmf/common.sh@124 -- # set -e 00:21:21.149 12:18:14 -- nvmf/common.sh@125 -- # return 0 00:21:21.149 12:18:14 -- nvmf/common.sh@478 -- # '[' -n 67883 ']' 00:21:21.149 12:18:14 -- nvmf/common.sh@479 -- # killprocess 67883 00:21:21.149 12:18:14 -- common/autotest_common.sh@936 -- # '[' -z 67883 ']' 00:21:21.149 12:18:14 -- common/autotest_common.sh@940 -- # kill -0 67883 00:21:21.149 12:18:14 -- common/autotest_common.sh@941 -- # uname 00:21:21.149 12:18:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:21.149 12:18:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67883 00:21:21.149 killing process with pid 67883 00:21:21.149 12:18:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:21.149 12:18:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:21.149 12:18:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67883' 00:21:21.149 12:18:14 -- common/autotest_common.sh@955 -- # kill 67883 00:21:21.149 12:18:14 -- common/autotest_common.sh@960 -- # wait 67883 00:21:21.149 12:18:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:21.149 12:18:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:21.149 12:18:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:21.149 12:18:14 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:21.149 12:18:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:21.149 12:18:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.149 12:18:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.149 12:18:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.149 12:18:14 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:21.149 00:21:21.149 real 0m24.774s 00:21:21.149 user 0m40.574s 00:21:21.149 sys 0m6.948s 00:21:21.149 12:18:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:21.149 ************************************ 00:21:21.149 END TEST nvmf_zcopy 00:21:21.149 12:18:14 -- common/autotest_common.sh@10 -- # set +x 00:21:21.149 ************************************ 00:21:21.149 12:18:14 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:21:21.149 12:18:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:21.149 12:18:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:21.149 12:18:14 -- 
common/autotest_common.sh@10 -- # set +x 00:21:21.149 ************************************ 00:21:21.149 START TEST nvmf_nmic 00:21:21.149 ************************************ 00:21:21.149 12:18:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:21:21.407 * Looking for test storage... 00:21:21.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:21.407 12:18:14 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:21.407 12:18:14 -- nvmf/common.sh@7 -- # uname -s 00:21:21.407 12:18:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.407 12:18:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.407 12:18:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.407 12:18:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.407 12:18:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.407 12:18:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.407 12:18:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.407 12:18:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.407 12:18:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.407 12:18:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.407 12:18:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:21:21.407 12:18:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:21:21.407 12:18:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.407 12:18:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.407 12:18:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:21.407 12:18:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.407 12:18:14 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:21.407 12:18:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.407 12:18:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.407 12:18:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.407 12:18:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.407 12:18:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.407 12:18:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.407 12:18:14 -- paths/export.sh@5 -- # export PATH 00:21:21.407 12:18:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.407 12:18:14 -- nvmf/common.sh@47 -- # : 0 00:21:21.407 12:18:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.407 12:18:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.407 12:18:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.407 12:18:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.407 12:18:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.407 12:18:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.407 12:18:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.407 12:18:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.407 12:18:14 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:21.407 12:18:14 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:21.407 12:18:14 -- target/nmic.sh@14 -- # nvmftestinit 00:21:21.407 12:18:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:21.407 12:18:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.407 12:18:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:21.407 12:18:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:21.407 12:18:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:21.407 12:18:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.407 12:18:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.407 12:18:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.407 12:18:14 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:21.407 12:18:14 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:21.407 12:18:14 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:21.407 12:18:14 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:21.407 12:18:14 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:21.407 12:18:14 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:21.407 12:18:14 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.407 12:18:14 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.407 12:18:14 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:21.407 12:18:14 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:21.407 12:18:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:21.407 12:18:14 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:21.407 12:18:14 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:21.407 12:18:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.407 12:18:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:21.407 12:18:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:21.407 12:18:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:21.407 12:18:14 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:21.407 12:18:14 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:21.407 12:18:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:21.407 Cannot find device "nvmf_tgt_br" 00:21:21.407 12:18:14 -- nvmf/common.sh@155 -- # true 00:21:21.407 12:18:14 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:21.407 Cannot find device "nvmf_tgt_br2" 00:21:21.407 12:18:14 -- nvmf/common.sh@156 -- # true 00:21:21.407 12:18:14 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:21.407 12:18:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:21.407 Cannot find device "nvmf_tgt_br" 00:21:21.407 12:18:14 -- nvmf/common.sh@158 -- # true 00:21:21.407 12:18:14 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:21.407 Cannot find device "nvmf_tgt_br2" 00:21:21.407 12:18:14 -- nvmf/common.sh@159 -- # true 00:21:21.407 12:18:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:21.407 12:18:14 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:21.407 12:18:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:21.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:21.407 12:18:14 -- nvmf/common.sh@162 -- # true 00:21:21.407 12:18:14 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:21.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:21.407 12:18:14 -- nvmf/common.sh@163 -- # true 00:21:21.407 12:18:14 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:21.407 12:18:14 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:21.407 12:18:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:21.407 12:18:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:21.407 12:18:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:21.407 12:18:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:21.665 12:18:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:21.665 12:18:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:21.666 12:18:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:21.666 12:18:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:21.666 12:18:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:21.666 12:18:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:21.666 12:18:14 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:21.666 12:18:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:21.666 12:18:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:21.666 12:18:14 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:21:21.666 12:18:14 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:21.666 12:18:14 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:21.666 12:18:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:21.666 12:18:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:21.666 12:18:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:21.666 12:18:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:21.666 12:18:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:21.666 12:18:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:21.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:21.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:21:21.666 00:21:21.666 --- 10.0.0.2 ping statistics --- 00:21:21.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.666 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:21:21.666 12:18:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:21.666 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:21.666 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:21:21.666 00:21:21.666 --- 10.0.0.3 ping statistics --- 00:21:21.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.666 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:21:21.666 12:18:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:21.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:21.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:21:21.666 00:21:21.666 --- 10.0.0.1 ping statistics --- 00:21:21.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.666 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:21:21.666 12:18:15 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.666 12:18:15 -- nvmf/common.sh@422 -- # return 0 00:21:21.666 12:18:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:21.666 12:18:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.666 12:18:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:21.666 12:18:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:21.666 12:18:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.666 12:18:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:21.666 12:18:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:21.666 12:18:15 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:21:21.666 12:18:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:21.666 12:18:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:21.666 12:18:15 -- common/autotest_common.sh@10 -- # set +x 00:21:21.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
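For reference, the nvmf_veth_init steps traced above reduce to a short standalone sequence. The sketch below is assembled from the commands in the trace (interface names, the 10.0.0.0/24 addresses and port 4420 are the values this test happens to use, not requirements) and assumes iproute2 and iptables are available on the host:

    #!/usr/bin/env bash
    # Rebuild the initiator/target veth topology used by the test (condensed sketch).
    set -e
    ip netns add nvmf_tgt_ns_spdk                                # target runs in its own netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target ends into the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                              # bridge initiator and target sides
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # sanity-check connectivity

The ping round trips reported in the trace (well under 0.1 ms) are what you would expect across a local veth/bridge path.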
00:21:21.666 12:18:15 -- nvmf/common.sh@470 -- # nvmfpid=68361 00:21:21.666 12:18:15 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:21.666 12:18:15 -- nvmf/common.sh@471 -- # waitforlisten 68361 00:21:21.666 12:18:15 -- common/autotest_common.sh@817 -- # '[' -z 68361 ']' 00:21:21.666 12:18:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.666 12:18:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:21.666 12:18:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.666 12:18:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:21.666 12:18:15 -- common/autotest_common.sh@10 -- # set +x 00:21:21.666 [2024-04-26 12:18:15.070145] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:21:21.666 [2024-04-26 12:18:15.070246] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.924 [2024-04-26 12:18:15.207719] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:21.924 [2024-04-26 12:18:15.318111] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.924 [2024-04-26 12:18:15.318188] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.924 [2024-04-26 12:18:15.318203] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.924 [2024-04-26 12:18:15.318211] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.924 [2024-04-26 12:18:15.318219] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
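The nvmfappstart step above amounts to launching nvmf_tgt inside the namespace and waiting for its RPC socket; the app's own notice also tells you how to snapshot tracepoints while it runs. A minimal sketch of that sequence, reusing the paths and flags from the trace (the pid and the simplified socket wait below stand in for the test's waitforlisten helper, so they are an assumption, not the exact helper logic):

    # Start the NVMe-oF target in the test namespace (binary path and flags copied from the trace).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Simplified stand-in for waitforlisten: block until the RPC Unix socket exists.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

    # Per the tracepoint notice above, a runtime snapshot can be taken with:
    #   spdk_trace -s nvmf -i 0
    # or /dev/shm/nvmf_trace.0 can be copied for offline analysis.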
00:21:21.924 [2024-04-26 12:18:15.318334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.924 [2024-04-26 12:18:15.318999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.924 [2024-04-26 12:18:15.319024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:21.924 [2024-04-26 12:18:15.319032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.857 12:18:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:22.857 12:18:16 -- common/autotest_common.sh@850 -- # return 0 00:21:22.857 12:18:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:22.857 12:18:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:22.857 12:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:22.857 12:18:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.857 12:18:16 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:22.857 12:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.857 12:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:22.857 [2024-04-26 12:18:16.050047] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.857 12:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.857 12:18:16 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:22.857 12:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.857 12:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:22.857 Malloc0 00:21:22.857 12:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.857 12:18:16 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:22.857 12:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.857 12:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:22.857 12:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.857 12:18:16 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:22.857 12:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.857 12:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:22.857 12:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.857 12:18:16 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.857 12:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.857 12:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:22.857 [2024-04-26 12:18:16.120715] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.857 test case1: single bdev can't be used in multiple subsystems 00:21:22.857 12:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.857 12:18:16 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:21:22.857 12:18:16 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:22.857 12:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.857 12:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:22.857 12:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.857 12:18:16 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:22.857 12:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:21:22.857 12:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:22.857 12:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.857 12:18:16 -- target/nmic.sh@28 -- # nmic_status=0 00:21:22.857 12:18:16 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:21:22.857 12:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.857 12:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:22.857 [2024-04-26 12:18:16.144570] bdev.c:8005:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:21:22.857 [2024-04-26 12:18:16.144926] subsystem.c:1940:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:21:22.857 [2024-04-26 12:18:16.145034] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:22.857 request: 00:21:22.857 { 00:21:22.857 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:21:22.857 "namespace": { 00:21:22.857 "bdev_name": "Malloc0", 00:21:22.857 "no_auto_visible": false 00:21:22.857 }, 00:21:22.857 "method": "nvmf_subsystem_add_ns", 00:21:22.857 "req_id": 1 00:21:22.857 } 00:21:22.857 Got JSON-RPC error response 00:21:22.857 response: 00:21:22.857 { 00:21:22.857 "code": -32602, 00:21:22.857 "message": "Invalid parameters" 00:21:22.857 } 00:21:22.857 12:18:16 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:22.857 12:18:16 -- target/nmic.sh@29 -- # nmic_status=1 00:21:22.857 12:18:16 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:21:22.857 12:18:16 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:21:22.857 Adding namespace failed - expected result. 00:21:22.857 12:18:16 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:21:22.857 test case2: host connect to nvmf target in multiple paths 00:21:22.857 12:18:16 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:22.857 12:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.857 12:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:22.857 [2024-04-26 12:18:16.160706] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:22.857 12:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.857 12:18:16 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 --hostid=df75fdfd-6375-420c-96e8-b6b24e93a083 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:22.857 12:18:16 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 --hostid=df75fdfd-6375-420c-96e8-b6b24e93a083 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:21:23.115 12:18:16 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:21:23.115 12:18:16 -- common/autotest_common.sh@1184 -- # local i=0 00:21:23.115 12:18:16 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:23.115 12:18:16 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:23.115 12:18:16 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:25.014 12:18:18 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:25.014 12:18:18 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:25.014 12:18:18 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:25.014 12:18:18 -- 
common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:25.014 12:18:18 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:25.014 12:18:18 -- common/autotest_common.sh@1194 -- # return 0 00:21:25.014 12:18:18 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:25.014 [global] 00:21:25.014 thread=1 00:21:25.014 invalidate=1 00:21:25.014 rw=write 00:21:25.014 time_based=1 00:21:25.014 runtime=1 00:21:25.014 ioengine=libaio 00:21:25.014 direct=1 00:21:25.014 bs=4096 00:21:25.014 iodepth=1 00:21:25.014 norandommap=0 00:21:25.014 numjobs=1 00:21:25.014 00:21:25.014 verify_dump=1 00:21:25.014 verify_backlog=512 00:21:25.014 verify_state_save=0 00:21:25.014 do_verify=1 00:21:25.014 verify=crc32c-intel 00:21:25.014 [job0] 00:21:25.014 filename=/dev/nvme0n1 00:21:25.273 Could not set queue depth (nvme0n1) 00:21:25.273 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:25.273 fio-3.35 00:21:25.273 Starting 1 thread 00:21:26.648 00:21:26.648 job0: (groupid=0, jobs=1): err= 0: pid=68453: Fri Apr 26 12:18:19 2024 00:21:26.648 read: IOPS=3054, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec) 00:21:26.648 slat (nsec): min=12221, max=43381, avg=15192.39, stdev=2291.08 00:21:26.648 clat (usec): min=143, max=257, avg=178.37, stdev=14.13 00:21:26.648 lat (usec): min=158, max=286, avg=193.56, stdev=14.33 00:21:26.648 clat percentiles (usec): 00:21:26.648 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:21:26.648 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:21:26.648 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 202], 00:21:26.648 | 99.00th=[ 212], 99.50th=[ 217], 99.90th=[ 241], 99.95th=[ 249], 00:21:26.648 | 99.99th=[ 258] 00:21:26.648 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:21:26.648 slat (usec): min=17, max=242, avg=21.70, stdev= 6.06 00:21:26.648 clat (usec): min=86, max=247, avg=107.56, stdev=11.46 00:21:26.648 lat (usec): min=106, max=466, avg=129.26, stdev=14.08 00:21:26.648 clat percentiles (usec): 00:21:26.648 | 1.00th=[ 90], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 99], 00:21:26.648 | 30.00th=[ 102], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 110], 00:21:26.648 | 70.00th=[ 112], 80.00th=[ 114], 90.00th=[ 120], 95.00th=[ 126], 00:21:26.648 | 99.00th=[ 141], 99.50th=[ 149], 99.90th=[ 225], 99.95th=[ 235], 00:21:26.648 | 99.99th=[ 247] 00:21:26.648 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:21:26.648 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:21:26.648 lat (usec) : 100=11.04%, 250=88.94%, 500=0.02% 00:21:26.648 cpu : usr=2.10%, sys=9.10%, ctx=6140, majf=0, minf=2 00:21:26.648 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:26.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.648 issued rwts: total=3058,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:26.648 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:26.648 00:21:26.648 Run status group 0 (all jobs): 00:21:26.648 READ: bw=11.9MiB/s (12.5MB/s), 11.9MiB/s-11.9MiB/s (12.5MB/s-12.5MB/s), io=11.9MiB (12.5MB), run=1001-1001msec 00:21:26.648 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:21:26.649 00:21:26.649 Disk 
stats (read/write): 00:21:26.649 nvme0n1: ios=2610/3036, merge=0/0, ticks=488/346, in_queue=834, util=91.38% 00:21:26.649 12:18:19 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:26.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:21:26.649 12:18:19 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:26.649 12:18:19 -- common/autotest_common.sh@1205 -- # local i=0 00:21:26.649 12:18:19 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:26.649 12:18:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:26.649 12:18:19 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:26.649 12:18:19 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:26.649 12:18:19 -- common/autotest_common.sh@1217 -- # return 0 00:21:26.649 12:18:19 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:26.649 12:18:19 -- target/nmic.sh@53 -- # nvmftestfini 00:21:26.649 12:18:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:26.649 12:18:19 -- nvmf/common.sh@117 -- # sync 00:21:26.649 12:18:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:26.649 12:18:19 -- nvmf/common.sh@120 -- # set +e 00:21:26.649 12:18:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:26.649 12:18:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:26.649 rmmod nvme_tcp 00:21:26.649 rmmod nvme_fabrics 00:21:26.649 rmmod nvme_keyring 00:21:26.649 12:18:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:26.649 12:18:19 -- nvmf/common.sh@124 -- # set -e 00:21:26.649 12:18:19 -- nvmf/common.sh@125 -- # return 0 00:21:26.649 12:18:19 -- nvmf/common.sh@478 -- # '[' -n 68361 ']' 00:21:26.649 12:18:19 -- nvmf/common.sh@479 -- # killprocess 68361 00:21:26.649 12:18:19 -- common/autotest_common.sh@936 -- # '[' -z 68361 ']' 00:21:26.649 12:18:19 -- common/autotest_common.sh@940 -- # kill -0 68361 00:21:26.649 12:18:19 -- common/autotest_common.sh@941 -- # uname 00:21:26.649 12:18:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:26.649 12:18:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68361 00:21:26.649 killing process with pid 68361 00:21:26.649 12:18:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:26.649 12:18:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:26.649 12:18:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68361' 00:21:26.649 12:18:19 -- common/autotest_common.sh@955 -- # kill 68361 00:21:26.649 12:18:19 -- common/autotest_common.sh@960 -- # wait 68361 00:21:26.907 12:18:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:26.907 12:18:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:26.907 12:18:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:26.907 12:18:20 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:26.907 12:18:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:26.907 12:18:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.907 12:18:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.907 12:18:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.907 12:18:20 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:26.907 00:21:26.907 real 0m5.701s 00:21:26.907 user 0m18.424s 00:21:26.907 sys 0m2.040s 00:21:26.907 ************************************ 00:21:26.907 END TEST nvmf_nmic 00:21:26.907 
************************************ 00:21:26.907 12:18:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:26.907 12:18:20 -- common/autotest_common.sh@10 -- # set +x 00:21:26.907 12:18:20 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:26.907 12:18:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:26.907 12:18:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:26.907 12:18:20 -- common/autotest_common.sh@10 -- # set +x 00:21:27.166 ************************************ 00:21:27.166 START TEST nvmf_fio_target 00:21:27.166 ************************************ 00:21:27.166 12:18:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:27.166 * Looking for test storage... 00:21:27.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:27.166 12:18:20 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:27.166 12:18:20 -- nvmf/common.sh@7 -- # uname -s 00:21:27.166 12:18:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.166 12:18:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.166 12:18:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.166 12:18:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.166 12:18:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.166 12:18:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.166 12:18:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.166 12:18:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.166 12:18:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.166 12:18:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.166 12:18:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:21:27.166 12:18:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:21:27.166 12:18:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.166 12:18:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.166 12:18:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:27.166 12:18:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.166 12:18:20 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:27.166 12:18:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.166 12:18:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.166 12:18:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.166 12:18:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.166 12:18:20 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.167 12:18:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.167 12:18:20 -- paths/export.sh@5 -- # export PATH 00:21:27.167 12:18:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.167 12:18:20 -- nvmf/common.sh@47 -- # : 0 00:21:27.167 12:18:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:27.167 12:18:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:27.167 12:18:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.167 12:18:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.167 12:18:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.167 12:18:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:27.167 12:18:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:27.167 12:18:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:27.167 12:18:20 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:27.167 12:18:20 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:27.167 12:18:20 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:27.167 12:18:20 -- target/fio.sh@16 -- # nvmftestinit 00:21:27.167 12:18:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:27.167 12:18:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.167 12:18:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:27.167 12:18:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:27.167 12:18:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:27.167 12:18:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.167 12:18:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.167 12:18:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.167 12:18:20 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:27.167 12:18:20 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:27.167 12:18:20 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:27.167 12:18:20 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 
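For reference, the nvmf_nmic run that finished above needed only a handful of RPC calls, and its first test case hinges on one expected failure: a malloc bdev already exported as a namespace of cnode1 cannot be claimed by a second subsystem. Condensed from that trace (rpc_cmd in the trace ultimately drives scripts/rpc.py), the sequence is roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # test case 1: the same bdev cannot back a namespace in a second subsystem
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        && echo 'unexpected success' \
        || echo ' Adding namespace failed - expected result.'
    # test case 2: one subsystem, two listeners, host connects over both paths
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 \
        --hostid=df75fdfd-6375-420c-96e8-b6b24e93a083
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 \
        --hostid=df75fdfd-6375-420c-96e8-b6b24e93a083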
00:21:27.167 12:18:20 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:27.167 12:18:20 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:27.167 12:18:20 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.167 12:18:20 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.167 12:18:20 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:27.167 12:18:20 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:27.167 12:18:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:27.167 12:18:20 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:27.167 12:18:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:27.167 12:18:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.167 12:18:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:27.167 12:18:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:27.167 12:18:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:27.167 12:18:20 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:27.167 12:18:20 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:27.167 12:18:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:27.167 Cannot find device "nvmf_tgt_br" 00:21:27.167 12:18:20 -- nvmf/common.sh@155 -- # true 00:21:27.167 12:18:20 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:27.167 Cannot find device "nvmf_tgt_br2" 00:21:27.167 12:18:20 -- nvmf/common.sh@156 -- # true 00:21:27.167 12:18:20 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:27.167 12:18:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:27.167 Cannot find device "nvmf_tgt_br" 00:21:27.167 12:18:20 -- nvmf/common.sh@158 -- # true 00:21:27.167 12:18:20 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:27.167 Cannot find device "nvmf_tgt_br2" 00:21:27.167 12:18:20 -- nvmf/common.sh@159 -- # true 00:21:27.167 12:18:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:27.167 12:18:20 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:27.167 12:18:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:27.167 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:27.167 12:18:20 -- nvmf/common.sh@162 -- # true 00:21:27.167 12:18:20 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:27.426 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:27.426 12:18:20 -- nvmf/common.sh@163 -- # true 00:21:27.426 12:18:20 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:27.426 12:18:20 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:27.426 12:18:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:27.426 12:18:20 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:27.426 12:18:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:27.426 12:18:20 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:27.426 12:18:20 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:27.426 12:18:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:27.426 12:18:20 -- nvmf/common.sh@180 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:27.426 12:18:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:27.426 12:18:20 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:27.426 12:18:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:27.426 12:18:20 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:27.426 12:18:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:27.426 12:18:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:27.426 12:18:20 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:27.426 12:18:20 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:27.426 12:18:20 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:27.426 12:18:20 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:27.426 12:18:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:27.426 12:18:20 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:27.426 12:18:20 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:27.426 12:18:20 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:27.426 12:18:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:27.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:21:27.426 00:21:27.426 --- 10.0.0.2 ping statistics --- 00:21:27.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.426 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:27.426 12:18:20 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:27.426 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:27.426 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:21:27.426 00:21:27.426 --- 10.0.0.3 ping statistics --- 00:21:27.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.426 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:21:27.426 12:18:20 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:27.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:27.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:21:27.426 00:21:27.426 --- 10.0.0.1 ping statistics --- 00:21:27.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.426 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:21:27.426 12:18:20 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.426 12:18:20 -- nvmf/common.sh@422 -- # return 0 00:21:27.426 12:18:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:27.426 12:18:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.426 12:18:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:27.426 12:18:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:27.426 12:18:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.426 12:18:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:27.426 12:18:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:27.426 12:18:20 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:21:27.426 12:18:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:27.426 12:18:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:27.426 12:18:20 -- common/autotest_common.sh@10 -- # set +x 00:21:27.426 12:18:20 -- nvmf/common.sh@470 -- # nvmfpid=68640 00:21:27.426 12:18:20 -- nvmf/common.sh@471 -- # waitforlisten 68640 00:21:27.426 12:18:20 -- common/autotest_common.sh@817 -- # '[' -z 68640 ']' 00:21:27.426 12:18:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.426 12:18:20 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:27.426 12:18:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:27.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.426 12:18:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.426 12:18:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:27.426 12:18:20 -- common/autotest_common.sh@10 -- # set +x 00:21:27.426 [2024-04-26 12:18:20.877346] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:21:27.426 [2024-04-26 12:18:20.877421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.685 [2024-04-26 12:18:21.014359] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:27.685 [2024-04-26 12:18:21.145716] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.685 [2024-04-26 12:18:21.145789] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.685 [2024-04-26 12:18:21.145804] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.685 [2024-04-26 12:18:21.145814] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.685 [2024-04-26 12:18:21.145823] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
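Once this second target is up, fio.sh provisions it over JSON-RPC: two plain malloc namespaces, a RAID-0 bdev built from two more mallocs, and a concat bdev built from three, all exported through cnode1 and a TCP listener (traced below). Condensed, and assuming the default /var/tmp/spdk.sock RPC socket, the sequence is roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for _ in 1 2 3 4 5 6 7; do
        $rpc bdev_malloc_create 64 512            # auto-named Malloc0 .. Malloc6
    done
    $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 \
        --hostid=df75fdfd-6375-420c-96e8-b6b24e93a083
    # four namespaces -> the host sees /dev/nvme0n1 .. /dev/nvme0n4 for the fio jobs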
00:21:27.685 [2024-04-26 12:18:21.145996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.685 [2024-04-26 12:18:21.146148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.685 [2024-04-26 12:18:21.146273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.685 [2024-04-26 12:18:21.146278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.621 12:18:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:28.621 12:18:21 -- common/autotest_common.sh@850 -- # return 0 00:21:28.621 12:18:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:28.621 12:18:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:28.621 12:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:28.621 12:18:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.621 12:18:21 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:28.621 [2024-04-26 12:18:22.056656] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.884 12:18:22 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:28.884 12:18:22 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:21:28.884 12:18:22 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:29.143 12:18:22 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:21:29.143 12:18:22 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:29.402 12:18:22 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:21:29.402 12:18:22 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:29.660 12:18:23 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:21:29.660 12:18:23 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:21:29.918 12:18:23 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:30.177 12:18:23 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:21:30.177 12:18:23 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:30.435 12:18:23 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:21:30.435 12:18:23 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:30.694 12:18:24 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:21:30.694 12:18:24 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:21:30.952 12:18:24 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:31.210 12:18:24 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:31.210 12:18:24 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:31.468 12:18:24 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:31.468 12:18:24 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:31.726 12:18:25 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.984 [2024-04-26 12:18:25.249670] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.984 12:18:25 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:21:32.242 12:18:25 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:21:32.499 12:18:25 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 --hostid=df75fdfd-6375-420c-96e8-b6b24e93a083 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:32.499 12:18:25 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:21:32.499 12:18:25 -- common/autotest_common.sh@1184 -- # local i=0 00:21:32.500 12:18:25 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:32.500 12:18:25 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:21:32.500 12:18:25 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:21:32.500 12:18:25 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:34.403 12:18:27 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:34.404 12:18:27 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:34.404 12:18:27 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:34.404 12:18:27 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:21:34.404 12:18:27 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:34.404 12:18:27 -- common/autotest_common.sh@1194 -- # return 0 00:21:34.404 12:18:27 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:34.661 [global] 00:21:34.661 thread=1 00:21:34.661 invalidate=1 00:21:34.661 rw=write 00:21:34.661 time_based=1 00:21:34.661 runtime=1 00:21:34.661 ioengine=libaio 00:21:34.661 direct=1 00:21:34.661 bs=4096 00:21:34.661 iodepth=1 00:21:34.661 norandommap=0 00:21:34.661 numjobs=1 00:21:34.661 00:21:34.661 verify_dump=1 00:21:34.661 verify_backlog=512 00:21:34.661 verify_state_save=0 00:21:34.661 do_verify=1 00:21:34.661 verify=crc32c-intel 00:21:34.661 [job0] 00:21:34.661 filename=/dev/nvme0n1 00:21:34.661 [job1] 00:21:34.661 filename=/dev/nvme0n2 00:21:34.661 [job2] 00:21:34.661 filename=/dev/nvme0n3 00:21:34.661 [job3] 00:21:34.661 filename=/dev/nvme0n4 00:21:34.661 Could not set queue depth (nvme0n1) 00:21:34.661 Could not set queue depth (nvme0n2) 00:21:34.661 Could not set queue depth (nvme0n3) 00:21:34.661 Could not set queue depth (nvme0n4) 00:21:34.661 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:34.661 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:34.661 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:34.661 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:34.661 fio-3.35 00:21:34.661 Starting 4 threads 00:21:36.037 00:21:36.037 job0: (groupid=0, jobs=1): err= 0: pid=68825: Fri Apr 26 12:18:29 2024 00:21:36.037 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:21:36.037 slat (nsec): min=16400, max=65914, avg=22805.32, stdev=6721.32 00:21:36.037 clat (usec): min=166, max=775, avg=308.02, stdev=73.61 
00:21:36.037 lat (usec): min=188, max=812, avg=330.82, stdev=78.00 00:21:36.037 clat percentiles (usec): 00:21:36.037 | 1.00th=[ 212], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:21:36.037 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 293], 00:21:36.037 | 70.00th=[ 367], 80.00th=[ 388], 90.00th=[ 408], 95.00th=[ 429], 00:21:36.037 | 99.00th=[ 490], 99.50th=[ 562], 99.90th=[ 652], 99.95th=[ 775], 00:21:36.037 | 99.99th=[ 775] 00:21:36.037 write: IOPS=1805, BW=7221KiB/s (7394kB/s)(7228KiB/1001msec); 0 zone resets 00:21:36.037 slat (usec): min=21, max=137, avg=32.25, stdev=11.36 00:21:36.037 clat (usec): min=100, max=824, avg=234.74, stdev=79.04 00:21:36.037 lat (usec): min=126, max=859, avg=267.00, stdev=86.75 00:21:36.037 clat percentiles (usec): 00:21:36.037 | 1.00th=[ 116], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 184], 00:21:36.037 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 212], 60.00th=[ 223], 00:21:36.037 | 70.00th=[ 243], 80.00th=[ 281], 90.00th=[ 318], 95.00th=[ 400], 00:21:36.037 | 99.00th=[ 510], 99.50th=[ 537], 99.90th=[ 807], 99.95th=[ 824], 00:21:36.037 | 99.99th=[ 824] 00:21:36.037 bw ( KiB/s): min= 8192, max= 8192, per=26.31%, avg=8192.00, stdev= 0.00, samples=1 00:21:36.037 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:36.037 lat (usec) : 250=50.52%, 500=48.40%, 750=0.96%, 1000=0.12% 00:21:36.037 cpu : usr=2.20%, sys=7.10%, ctx=3343, majf=0, minf=7 00:21:36.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:36.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.037 issued rwts: total=1536,1807,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:36.037 job1: (groupid=0, jobs=1): err= 0: pid=68826: Fri Apr 26 12:18:29 2024 00:21:36.037 read: IOPS=1681, BW=6725KiB/s (6887kB/s)(6732KiB/1001msec) 00:21:36.037 slat (nsec): min=12878, max=68578, avg=19069.43, stdev=5046.42 00:21:36.037 clat (usec): min=163, max=1938, avg=274.42, stdev=68.60 00:21:36.037 lat (usec): min=180, max=1959, avg=293.49, stdev=69.50 00:21:36.037 clat percentiles (usec): 00:21:36.037 | 1.00th=[ 217], 5.00th=[ 235], 10.00th=[ 237], 20.00th=[ 243], 00:21:36.037 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:21:36.037 | 70.00th=[ 273], 80.00th=[ 293], 90.00th=[ 330], 95.00th=[ 367], 00:21:36.037 | 99.00th=[ 478], 99.50th=[ 490], 99.90th=[ 1385], 99.95th=[ 1942], 00:21:36.037 | 99.99th=[ 1942] 00:21:36.037 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:21:36.037 slat (usec): min=18, max=136, avg=28.90, stdev=10.39 00:21:36.037 clat (usec): min=94, max=2054, avg=214.20, stdev=74.28 00:21:36.037 lat (usec): min=118, max=2084, avg=243.10, stdev=80.45 00:21:36.037 clat percentiles (usec): 00:21:36.037 | 1.00th=[ 113], 5.00th=[ 123], 10.00th=[ 133], 20.00th=[ 176], 00:21:36.037 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 200], 60.00th=[ 210], 00:21:36.037 | 70.00th=[ 237], 80.00th=[ 260], 90.00th=[ 302], 95.00th=[ 338], 00:21:36.037 | 99.00th=[ 400], 99.50th=[ 420], 99.90th=[ 469], 99.95th=[ 685], 00:21:36.037 | 99.99th=[ 2057] 00:21:36.037 bw ( KiB/s): min= 8192, max= 8192, per=26.31%, avg=8192.00, stdev= 0.00, samples=2 00:21:36.037 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:21:36.037 lat (usec) : 100=0.08%, 250=57.76%, 500=41.95%, 750=0.13% 00:21:36.037 lat (msec) : 2=0.05%, 4=0.03% 
00:21:36.037 cpu : usr=1.80%, sys=7.20%, ctx=3732, majf=0, minf=9 00:21:36.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:36.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.038 issued rwts: total=1683,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:36.038 job2: (groupid=0, jobs=1): err= 0: pid=68827: Fri Apr 26 12:18:29 2024 00:21:36.038 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:21:36.038 slat (nsec): min=13256, max=77783, avg=21907.77, stdev=7133.26 00:21:36.038 clat (usec): min=171, max=726, avg=314.27, stdev=89.69 00:21:36.038 lat (usec): min=187, max=762, avg=336.17, stdev=94.36 00:21:36.038 clat percentiles (usec): 00:21:36.038 | 1.00th=[ 206], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:21:36.038 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 293], 00:21:36.038 | 70.00th=[ 363], 80.00th=[ 388], 90.00th=[ 416], 95.00th=[ 461], 00:21:36.038 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 725], 99.95th=[ 725], 00:21:36.038 | 99.99th=[ 725] 00:21:36.038 write: IOPS=1887, BW=7548KiB/s (7730kB/s)(7556KiB/1001msec); 0 zone resets 00:21:36.038 slat (usec): min=17, max=110, avg=32.81, stdev=10.87 00:21:36.038 clat (usec): min=115, max=570, avg=218.41, stdev=54.89 00:21:36.038 lat (usec): min=138, max=598, avg=251.22, stdev=59.67 00:21:36.038 clat percentiles (usec): 00:21:36.038 | 1.00th=[ 126], 5.00th=[ 157], 10.00th=[ 174], 20.00th=[ 184], 00:21:36.038 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 204], 60.00th=[ 215], 00:21:36.038 | 70.00th=[ 227], 80.00th=[ 255], 90.00th=[ 293], 95.00th=[ 314], 00:21:36.038 | 99.00th=[ 461], 99.50th=[ 502], 99.90th=[ 545], 99.95th=[ 570], 00:21:36.038 | 99.99th=[ 570] 00:21:36.038 bw ( KiB/s): min= 8192, max= 8192, per=26.31%, avg=8192.00, stdev= 0.00, samples=1 00:21:36.038 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:36.038 lat (usec) : 250=53.81%, 500=44.53%, 750=1.66% 00:21:36.038 cpu : usr=1.70%, sys=7.80%, ctx=3425, majf=0, minf=8 00:21:36.038 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:36.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.038 issued rwts: total=1536,1889,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:36.038 job3: (groupid=0, jobs=1): err= 0: pid=68828: Fri Apr 26 12:18:29 2024 00:21:36.038 read: IOPS=1728, BW=6913KiB/s (7079kB/s)(6920KiB/1001msec) 00:21:36.038 slat (nsec): min=15471, max=72487, avg=23803.31, stdev=7577.20 00:21:36.038 clat (usec): min=165, max=864, avg=279.32, stdev=69.22 00:21:36.038 lat (usec): min=185, max=887, avg=303.12, stdev=71.77 00:21:36.038 clat percentiles (usec): 00:21:36.038 | 1.00th=[ 208], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 239], 00:21:36.038 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 265], 00:21:36.038 | 70.00th=[ 277], 80.00th=[ 302], 90.00th=[ 359], 95.00th=[ 461], 00:21:36.038 | 99.00th=[ 519], 99.50th=[ 570], 99.90th=[ 676], 99.95th=[ 865], 00:21:36.038 | 99.99th=[ 865] 00:21:36.038 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:21:36.038 slat (usec): min=16, max=130, avg=29.91, stdev= 8.71 00:21:36.038 clat (usec): min=107, max=415, 
avg=197.45, stdev=41.14 00:21:36.038 lat (usec): min=129, max=546, avg=227.36, stdev=45.10 00:21:36.038 clat percentiles (usec): 00:21:36.038 | 1.00th=[ 116], 5.00th=[ 129], 10.00th=[ 145], 20.00th=[ 169], 00:21:36.038 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 200], 00:21:36.038 | 70.00th=[ 215], 80.00th=[ 235], 90.00th=[ 258], 95.00th=[ 269], 00:21:36.038 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 363], 99.95th=[ 388], 00:21:36.038 | 99.99th=[ 416] 00:21:36.038 bw ( KiB/s): min= 8192, max= 8192, per=26.31%, avg=8192.00, stdev= 0.00, samples=1 00:21:36.038 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:36.038 lat (usec) : 250=65.88%, 500=33.62%, 750=0.48%, 1000=0.03% 00:21:36.038 cpu : usr=1.90%, sys=8.40%, ctx=3779, majf=0, minf=13 00:21:36.038 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:36.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.038 issued rwts: total=1730,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:36.038 00:21:36.038 Run status group 0 (all jobs): 00:21:36.038 READ: bw=25.3MiB/s (26.5MB/s), 6138KiB/s-6913KiB/s (6285kB/s-7079kB/s), io=25.3MiB (26.6MB), run=1001-1001msec 00:21:36.038 WRITE: bw=30.4MiB/s (31.9MB/s), 7221KiB/s-8184KiB/s (7394kB/s-8380kB/s), io=30.4MiB (31.9MB), run=1001-1001msec 00:21:36.038 00:21:36.038 Disk stats (read/write): 00:21:36.038 nvme0n1: ios=1433/1536, merge=0/0, ticks=465/385, in_queue=850, util=89.18% 00:21:36.038 nvme0n2: ios=1585/1560, merge=0/0, ticks=495/372, in_queue=867, util=90.68% 00:21:36.038 nvme0n3: ios=1428/1536, merge=0/0, ticks=441/355, in_queue=796, util=89.26% 00:21:36.038 nvme0n4: ios=1557/1702, merge=0/0, ticks=469/353, in_queue=822, util=90.54% 00:21:36.038 12:18:29 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:21:36.038 [global] 00:21:36.038 thread=1 00:21:36.038 invalidate=1 00:21:36.038 rw=randwrite 00:21:36.038 time_based=1 00:21:36.038 runtime=1 00:21:36.038 ioengine=libaio 00:21:36.038 direct=1 00:21:36.038 bs=4096 00:21:36.038 iodepth=1 00:21:36.038 norandommap=0 00:21:36.038 numjobs=1 00:21:36.038 00:21:36.038 verify_dump=1 00:21:36.038 verify_backlog=512 00:21:36.038 verify_state_save=0 00:21:36.038 do_verify=1 00:21:36.038 verify=crc32c-intel 00:21:36.038 [job0] 00:21:36.038 filename=/dev/nvme0n1 00:21:36.038 [job1] 00:21:36.038 filename=/dev/nvme0n2 00:21:36.038 [job2] 00:21:36.038 filename=/dev/nvme0n3 00:21:36.038 [job3] 00:21:36.038 filename=/dev/nvme0n4 00:21:36.038 Could not set queue depth (nvme0n1) 00:21:36.038 Could not set queue depth (nvme0n2) 00:21:36.038 Could not set queue depth (nvme0n3) 00:21:36.038 Could not set queue depth (nvme0n4) 00:21:36.038 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:36.038 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:36.038 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:36.038 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:36.038 fio-3.35 00:21:36.038 Starting 4 threads 00:21:37.412 00:21:37.412 job0: (groupid=0, jobs=1): err= 0: pid=68881: Fri Apr 26 12:18:30 2024 
00:21:37.412 read: IOPS=1676, BW=6705KiB/s (6866kB/s)(6712KiB/1001msec) 00:21:37.412 slat (nsec): min=13018, max=69707, avg=18475.66, stdev=4329.31 00:21:37.412 clat (usec): min=165, max=666, avg=303.04, stdev=74.41 00:21:37.412 lat (usec): min=180, max=693, avg=321.52, stdev=76.12 00:21:37.412 clat percentiles (usec): 00:21:37.412 | 1.00th=[ 186], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 265], 00:21:37.412 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:21:37.412 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 412], 95.00th=[ 510], 00:21:37.412 | 99.00th=[ 545], 99.50th=[ 553], 99.90th=[ 603], 99.95th=[ 668], 00:21:37.412 | 99.99th=[ 668] 00:21:37.412 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:21:37.412 slat (usec): min=16, max=143, avg=25.03, stdev= 6.73 00:21:37.412 clat (usec): min=107, max=407, avg=195.83, stdev=33.95 00:21:37.412 lat (usec): min=129, max=511, avg=220.86, stdev=35.01 00:21:37.412 clat percentiles (usec): 00:21:37.412 | 1.00th=[ 115], 5.00th=[ 126], 10.00th=[ 139], 20.00th=[ 178], 00:21:37.412 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:21:37.412 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 243], 00:21:37.412 | 99.00th=[ 262], 99.50th=[ 265], 99.90th=[ 347], 99.95th=[ 367], 00:21:37.412 | 99.99th=[ 408] 00:21:37.412 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:21:37.412 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:37.412 lat (usec) : 250=56.07%, 500=40.79%, 750=3.14% 00:21:37.412 cpu : usr=1.40%, sys=7.00%, ctx=3728, majf=0, minf=13 00:21:37.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:37.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.412 issued rwts: total=1678,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:37.412 job1: (groupid=0, jobs=1): err= 0: pid=68882: Fri Apr 26 12:18:30 2024 00:21:37.412 read: IOPS=1581, BW=6326KiB/s (6477kB/s)(6332KiB/1001msec) 00:21:37.412 slat (nsec): min=14853, max=72690, avg=19001.52, stdev=3880.67 00:21:37.412 clat (usec): min=182, max=2175, avg=299.43, stdev=86.95 00:21:37.412 lat (usec): min=200, max=2192, avg=318.43, stdev=87.92 00:21:37.412 clat percentiles (usec): 00:21:37.412 | 1.00th=[ 223], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 265], 00:21:37.412 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:21:37.412 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 367], 95.00th=[ 424], 00:21:37.412 | 99.00th=[ 529], 99.50th=[ 586], 99.90th=[ 1713], 99.95th=[ 2180], 00:21:37.412 | 99.99th=[ 2180] 00:21:37.412 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:21:37.412 slat (usec): min=21, max=117, avg=29.40, stdev= 8.10 00:21:37.412 clat (usec): min=114, max=435, avg=208.45, stdev=54.08 00:21:37.412 lat (usec): min=144, max=482, avg=237.85, stdev=58.08 00:21:37.412 clat percentiles (usec): 00:21:37.412 | 1.00th=[ 124], 5.00th=[ 133], 10.00th=[ 141], 20.00th=[ 178], 00:21:37.412 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:21:37.412 | 70.00th=[ 217], 80.00th=[ 229], 90.00th=[ 281], 95.00th=[ 343], 00:21:37.412 | 99.00th=[ 379], 99.50th=[ 388], 99.90th=[ 408], 99.95th=[ 429], 00:21:37.412 | 99.99th=[ 437] 00:21:37.412 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, 
samples=1 00:21:37.412 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:37.412 lat (usec) : 250=50.90%, 500=48.14%, 750=0.83%, 1000=0.03% 00:21:37.412 lat (msec) : 2=0.08%, 4=0.03% 00:21:37.412 cpu : usr=2.00%, sys=7.10%, ctx=3631, majf=0, minf=16 00:21:37.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:37.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.412 issued rwts: total=1583,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:37.412 job2: (groupid=0, jobs=1): err= 0: pid=68883: Fri Apr 26 12:18:30 2024 00:21:37.412 read: IOPS=1745, BW=6981KiB/s (7149kB/s)(6988KiB/1001msec) 00:21:37.412 slat (nsec): min=9460, max=51079, avg=16737.04, stdev=3708.39 00:21:37.412 clat (usec): min=151, max=1137, avg=262.68, stdev=47.50 00:21:37.412 lat (usec): min=170, max=1157, avg=279.42, stdev=47.79 00:21:37.412 clat percentiles (usec): 00:21:37.412 | 1.00th=[ 169], 5.00th=[ 221], 10.00th=[ 237], 20.00th=[ 245], 00:21:37.412 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:21:37.412 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 306], 00:21:37.412 | 99.00th=[ 408], 99.50th=[ 537], 99.90th=[ 955], 99.95th=[ 1139], 00:21:37.412 | 99.99th=[ 1139] 00:21:37.412 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:21:37.412 slat (nsec): min=13425, max=95813, avg=24300.11, stdev=6446.27 00:21:37.412 clat (usec): min=118, max=463, avg=221.92, stdev=38.33 00:21:37.413 lat (usec): min=142, max=486, avg=246.22, stdev=40.42 00:21:37.413 clat percentiles (usec): 00:21:37.413 | 1.00th=[ 145], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 196], 00:21:37.413 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 221], 00:21:37.413 | 70.00th=[ 229], 80.00th=[ 247], 90.00th=[ 281], 95.00th=[ 297], 00:21:37.413 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[ 408], 99.95th=[ 424], 00:21:37.413 | 99.99th=[ 465] 00:21:37.413 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:21:37.413 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:37.413 lat (usec) : 250=57.58%, 500=42.13%, 750=0.21%, 1000=0.05% 00:21:37.413 lat (msec) : 2=0.03% 00:21:37.413 cpu : usr=1.60%, sys=6.90%, ctx=3804, majf=0, minf=9 00:21:37.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:37.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.413 issued rwts: total=1747,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:37.413 job3: (groupid=0, jobs=1): err= 0: pid=68884: Fri Apr 26 12:18:30 2024 00:21:37.413 read: IOPS=1686, BW=6744KiB/s (6906kB/s)(6744KiB/1000msec) 00:21:37.413 slat (nsec): min=8926, max=86853, avg=16486.18, stdev=6789.78 00:21:37.413 clat (usec): min=156, max=7383, avg=275.21, stdev=210.40 00:21:37.413 lat (usec): min=173, max=7400, avg=291.69, stdev=210.83 00:21:37.413 clat percentiles (usec): 00:21:37.413 | 1.00th=[ 217], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 247], 00:21:37.413 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 265], 00:21:37.413 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 314], 00:21:37.413 | 99.00th=[ 449], 99.50th=[ 537], 
99.90th=[ 3556], 99.95th=[ 7373], 00:21:37.413 | 99.99th=[ 7373] 00:21:37.413 write: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec); 0 zone resets 00:21:37.413 slat (usec): min=11, max=144, avg=21.89, stdev= 8.60 00:21:37.413 clat (usec): min=108, max=793, avg=222.60, stdev=40.61 00:21:37.413 lat (usec): min=132, max=817, avg=244.50, stdev=41.81 00:21:37.413 clat percentiles (usec): 00:21:37.413 | 1.00th=[ 133], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 196], 00:21:37.413 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 221], 00:21:37.413 | 70.00th=[ 231], 80.00th=[ 245], 90.00th=[ 281], 95.00th=[ 297], 00:21:37.413 | 99.00th=[ 338], 99.50th=[ 355], 99.90th=[ 449], 99.95th=[ 506], 00:21:37.413 | 99.99th=[ 791] 00:21:37.413 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:21:37.413 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:37.413 lat (usec) : 250=57.34%, 500=42.29%, 750=0.21%, 1000=0.03% 00:21:37.413 lat (msec) : 2=0.05%, 4=0.05%, 10=0.03% 00:21:37.413 cpu : usr=1.60%, sys=6.20%, ctx=3746, majf=0, minf=7 00:21:37.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:37.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.413 issued rwts: total=1686,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:37.413 00:21:37.413 Run status group 0 (all jobs): 00:21:37.413 READ: bw=26.1MiB/s (27.4MB/s), 6326KiB/s-6981KiB/s (6477kB/s-7149kB/s), io=26.1MiB (27.4MB), run=1000-1001msec 00:21:37.413 WRITE: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8192KiB/s (8380kB/s-8389kB/s), io=32.0MiB (33.6MB), run=1000-1001msec 00:21:37.413 00:21:37.413 Disk stats (read/write): 00:21:37.413 nvme0n1: ios=1586/1690, merge=0/0, ticks=481/349, in_queue=830, util=87.96% 00:21:37.413 nvme0n2: ios=1559/1536, merge=0/0, ticks=480/344, in_queue=824, util=88.44% 00:21:37.413 nvme0n3: ios=1536/1688, merge=0/0, ticks=400/392, in_queue=792, util=89.34% 00:21:37.413 nvme0n4: ios=1536/1607, merge=0/0, ticks=395/351, in_queue=746, util=88.96% 00:21:37.413 12:18:30 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:21:37.413 [global] 00:21:37.413 thread=1 00:21:37.413 invalidate=1 00:21:37.413 rw=write 00:21:37.413 time_based=1 00:21:37.413 runtime=1 00:21:37.413 ioengine=libaio 00:21:37.413 direct=1 00:21:37.413 bs=4096 00:21:37.413 iodepth=128 00:21:37.413 norandommap=0 00:21:37.413 numjobs=1 00:21:37.413 00:21:37.413 verify_dump=1 00:21:37.413 verify_backlog=512 00:21:37.413 verify_state_save=0 00:21:37.413 do_verify=1 00:21:37.413 verify=crc32c-intel 00:21:37.413 [job0] 00:21:37.413 filename=/dev/nvme0n1 00:21:37.413 [job1] 00:21:37.413 filename=/dev/nvme0n2 00:21:37.413 [job2] 00:21:37.413 filename=/dev/nvme0n3 00:21:37.413 [job3] 00:21:37.413 filename=/dev/nvme0n4 00:21:37.413 Could not set queue depth (nvme0n1) 00:21:37.413 Could not set queue depth (nvme0n2) 00:21:37.413 Could not set queue depth (nvme0n3) 00:21:37.413 Could not set queue depth (nvme0n4) 00:21:37.413 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:37.413 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:37.413 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:21:37.413 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:37.413 fio-3.35 00:21:37.413 Starting 4 threads 00:21:38.788 00:21:38.788 job0: (groupid=0, jobs=1): err= 0: pid=68944: Fri Apr 26 12:18:31 2024 00:21:38.788 read: IOPS=1821, BW=7285KiB/s (7460kB/s)(7300KiB/1002msec) 00:21:38.788 slat (usec): min=9, max=12214, avg=299.49, stdev=1605.96 00:21:38.788 clat (usec): min=510, max=51351, avg=37134.60, stdev=8760.07 00:21:38.788 lat (usec): min=2411, max=51376, avg=37434.09, stdev=8666.31 00:21:38.788 clat percentiles (usec): 00:21:38.788 | 1.00th=[ 2704], 5.00th=[19268], 10.00th=[29230], 20.00th=[34866], 00:21:38.788 | 30.00th=[35390], 40.00th=[35914], 50.00th=[35914], 60.00th=[36439], 00:21:38.788 | 70.00th=[39060], 80.00th=[44827], 90.00th=[49546], 95.00th=[50594], 00:21:38.788 | 99.00th=[51119], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:21:38.788 | 99.99th=[51119] 00:21:38.788 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec); 0 zone resets 00:21:38.788 slat (usec): min=11, max=12433, avg=214.28, stdev=1105.03 00:21:38.788 clat (usec): min=17633, max=45197, avg=28253.83, stdev=3782.70 00:21:38.788 lat (usec): min=23698, max=45214, avg=28468.11, stdev=3612.14 00:21:38.788 clat percentiles (usec): 00:21:38.788 | 1.00th=[20579], 5.00th=[23987], 10.00th=[24773], 20.00th=[25035], 00:21:38.788 | 30.00th=[25297], 40.00th=[25560], 50.00th=[28443], 60.00th=[29492], 00:21:38.788 | 70.00th=[30016], 80.00th=[30540], 90.00th=[32375], 95.00th=[33162], 00:21:38.788 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:21:38.788 | 99.99th=[45351] 00:21:38.788 bw ( KiB/s): min= 8192, max= 8192, per=16.06%, avg=8192.00, stdev= 0.00, samples=2 00:21:38.788 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:21:38.788 lat (usec) : 750=0.03% 00:21:38.788 lat (msec) : 4=0.83%, 20=2.09%, 50=93.78%, 100=3.28% 00:21:38.788 cpu : usr=1.90%, sys=5.39%, ctx=122, majf=0, minf=17 00:21:38.788 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:21:38.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:38.788 issued rwts: total=1825,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:38.788 job1: (groupid=0, jobs=1): err= 0: pid=68945: Fri Apr 26 12:18:31 2024 00:21:38.788 read: IOPS=2423, BW=9693KiB/s (9926kB/s)(9732KiB/1004msec) 00:21:38.788 slat (usec): min=7, max=12139, avg=224.88, stdev=1284.55 00:21:38.788 clat (usec): min=2761, max=50917, avg=28230.43, stdev=10191.22 00:21:38.788 lat (usec): min=9664, max=50934, avg=28455.32, stdev=10192.77 00:21:38.788 clat percentiles (usec): 00:21:38.788 | 1.00th=[10159], 5.00th=[18744], 10.00th=[20579], 20.00th=[20841], 00:21:38.788 | 30.00th=[21365], 40.00th=[24773], 50.00th=[25297], 60.00th=[25560], 00:21:38.788 | 70.00th=[26084], 80.00th=[35390], 90.00th=[49021], 95.00th=[50594], 00:21:38.788 | 99.00th=[50594], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:21:38.788 | 99.99th=[51119] 00:21:38.788 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:21:38.788 slat (usec): min=13, max=11676, avg=167.56, stdev=896.54 00:21:38.788 clat (usec): min=12811, max=36650, avg=22534.63, stdev=6744.23 00:21:38.789 lat (usec): min=14667, max=36670, avg=22702.19, stdev=6734.22 00:21:38.789 clat 
percentiles (usec): 00:21:38.789 | 1.00th=[13435], 5.00th=[14746], 10.00th=[15008], 20.00th=[15139], 00:21:38.789 | 30.00th=[15533], 40.00th=[19530], 50.00th=[19792], 60.00th=[25297], 00:21:38.789 | 70.00th=[29492], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:21:38.789 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:21:38.789 | 99.99th=[36439] 00:21:38.789 bw ( KiB/s): min= 9208, max=11294, per=20.10%, avg=10251.00, stdev=1475.02, samples=2 00:21:38.789 iops : min= 2302, max= 2823, avg=2562.50, stdev=368.40, samples=2 00:21:38.789 lat (msec) : 4=0.02%, 10=0.32%, 20=29.58%, 50=67.59%, 100=2.48% 00:21:38.789 cpu : usr=2.89%, sys=7.98%, ctx=188, majf=0, minf=9 00:21:38.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:21:38.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:38.789 issued rwts: total=2433,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:38.789 job2: (groupid=0, jobs=1): err= 0: pid=68946: Fri Apr 26 12:18:31 2024 00:21:38.789 read: IOPS=5677, BW=22.2MiB/s (23.3MB/s)(22.2MiB/1003msec) 00:21:38.789 slat (usec): min=8, max=5441, avg=79.60, stdev=484.20 00:21:38.789 clat (usec): min=1076, max=18082, avg=11162.63, stdev=1329.83 00:21:38.789 lat (usec): min=4903, max=21304, avg=11242.23, stdev=1349.50 00:21:38.789 clat percentiles (usec): 00:21:38.789 | 1.00th=[ 5866], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[10814], 00:21:38.789 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:21:38.789 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11863], 95.00th=[12125], 00:21:38.789 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17957], 99.95th=[17957], 00:21:38.789 | 99.99th=[17957] 00:21:38.789 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:21:38.789 slat (usec): min=9, max=7958, avg=81.97, stdev=462.27 00:21:38.789 clat (usec): min=5603, max=14815, avg=10342.50, stdev=962.86 00:21:38.789 lat (usec): min=6033, max=14834, avg=10424.48, stdev=869.42 00:21:38.789 clat percentiles (usec): 00:21:38.789 | 1.00th=[ 6980], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:21:38.789 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:21:38.789 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11076], 95.00th=[11469], 00:21:38.789 | 99.00th=[14615], 99.50th=[14746], 99.90th=[14746], 99.95th=[14746], 00:21:38.789 | 99.99th=[14877] 00:21:38.789 bw ( KiB/s): min=24056, max=24576, per=47.68%, avg=24316.00, stdev=367.70, samples=2 00:21:38.789 iops : min= 6014, max= 6144, avg=6079.00, stdev=91.92, samples=2 00:21:38.789 lat (msec) : 2=0.01%, 10=17.15%, 20=82.84% 00:21:38.789 cpu : usr=4.39%, sys=16.17%, ctx=252, majf=0, minf=13 00:21:38.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:38.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:38.789 issued rwts: total=5695,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:38.789 job3: (groupid=0, jobs=1): err= 0: pid=68947: Fri Apr 26 12:18:31 2024 00:21:38.789 read: IOPS=1785, BW=7143KiB/s (7315kB/s)(7172KiB/1004msec) 00:21:38.789 slat (usec): min=6, max=12123, avg=303.13, stdev=1613.63 00:21:38.789 clat (usec): min=2094, 
max=50833, avg=37721.92, stdev=7566.70 00:21:38.789 lat (usec): min=9260, max=50882, avg=38025.05, stdev=7437.69 00:21:38.789 clat percentiles (usec): 00:21:38.789 | 1.00th=[ 9634], 5.00th=[27919], 10.00th=[31327], 20.00th=[34866], 00:21:38.789 | 30.00th=[35390], 40.00th=[35914], 50.00th=[35914], 60.00th=[36439], 00:21:38.789 | 70.00th=[40633], 80.00th=[44827], 90.00th=[49546], 95.00th=[50594], 00:21:38.789 | 99.00th=[50594], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:21:38.789 | 99.99th=[50594] 00:21:38.789 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:21:38.789 slat (usec): min=11, max=12091, avg=215.18, stdev=1107.23 00:21:38.789 clat (usec): min=16787, max=45231, avg=28360.43, stdev=3858.98 00:21:38.789 lat (usec): min=23131, max=45250, avg=28575.62, stdev=3694.08 00:21:38.789 clat percentiles (usec): 00:21:38.789 | 1.00th=[20841], 5.00th=[23462], 10.00th=[24511], 20.00th=[25035], 00:21:38.789 | 30.00th=[25297], 40.00th=[25822], 50.00th=[29492], 60.00th=[29754], 00:21:38.789 | 70.00th=[30016], 80.00th=[30540], 90.00th=[32637], 95.00th=[33162], 00:21:38.789 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:21:38.789 | 99.99th=[45351] 00:21:38.789 bw ( KiB/s): min= 8192, max= 8208, per=16.08%, avg=8200.00, stdev=11.31, samples=2 00:21:38.789 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:21:38.789 lat (msec) : 4=0.03%, 10=0.76%, 20=1.33%, 50=94.77%, 100=3.12% 00:21:38.789 cpu : usr=1.60%, sys=6.48%, ctx=123, majf=0, minf=9 00:21:38.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:21:38.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:38.789 issued rwts: total=1793,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:38.789 00:21:38.789 Run status group 0 (all jobs): 00:21:38.789 READ: bw=45.7MiB/s (47.9MB/s), 7143KiB/s-22.2MiB/s (7315kB/s-23.3MB/s), io=45.9MiB (48.1MB), run=1002-1004msec 00:21:38.789 WRITE: bw=49.8MiB/s (52.2MB/s), 8159KiB/s-23.9MiB/s (8355kB/s-25.1MB/s), io=50.0MiB (52.4MB), run=1002-1004msec 00:21:38.789 00:21:38.789 Disk stats (read/write): 00:21:38.789 nvme0n1: ios=1586/1792, merge=0/0, ticks=13912/10125, in_queue=24037, util=89.38% 00:21:38.789 nvme0n2: ios=2097/2336, merge=0/0, ticks=14758/11012, in_queue=25770, util=89.30% 00:21:38.789 nvme0n3: ios=5110/5120, merge=0/0, ticks=53243/48483, in_queue=101726, util=91.19% 00:21:38.789 nvme0n4: ios=1536/1792, merge=0/0, ticks=14883/11237, in_queue=26120, util=89.70% 00:21:38.789 12:18:31 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:21:38.789 [global] 00:21:38.789 thread=1 00:21:38.789 invalidate=1 00:21:38.789 rw=randwrite 00:21:38.789 time_based=1 00:21:38.789 runtime=1 00:21:38.789 ioengine=libaio 00:21:38.789 direct=1 00:21:38.789 bs=4096 00:21:38.789 iodepth=128 00:21:38.789 norandommap=0 00:21:38.789 numjobs=1 00:21:38.789 00:21:38.789 verify_dump=1 00:21:38.789 verify_backlog=512 00:21:38.789 verify_state_save=0 00:21:38.789 do_verify=1 00:21:38.789 verify=crc32c-intel 00:21:38.789 [job0] 00:21:38.789 filename=/dev/nvme0n1 00:21:38.789 [job1] 00:21:38.789 filename=/dev/nvme0n2 00:21:38.789 [job2] 00:21:38.789 filename=/dev/nvme0n3 00:21:38.789 [job3] 00:21:38.789 filename=/dev/nvme0n4 00:21:38.789 Could not set queue depth (nvme0n1) 
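The fio-wrapper calls traced above expand their flags into a generated job file before starting fio 3.35 against the four namespace devices; comparing the flags with the dumped config shows -i mapping to bs, -d to iodepth, -t to rw, -r to runtime, and -v enabling crc32c verification. A minimal hand-written equivalent of the randwrite pass, reconstructed only from the parameters echoed in this log (the wrapper's real file name and exact output may differ), would be:

    # Sketch only: values copied from the [global]/[jobN] dump above;
    # /tmp/nvmf-randwrite.fio is an arbitrary name, not the wrapper's.
    cat > /tmp/nvmf-randwrite.fio <<'EOF'
    [global]
    ioengine=libaio
    direct=1
    thread=1
    invalidate=1
    rw=randwrite
    bs=4096
    iodepth=128
    time_based=1
    runtime=1
    norandommap=0
    numjobs=1
    do_verify=1
    verify=crc32c-intel
    verify_backlog=512
    verify_dump=1
    verify_state_save=0

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4
    EOF
    fio /tmp/nvmf-randwrite.fio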
00:21:38.789 Could not set queue depth (nvme0n2) 00:21:38.789 Could not set queue depth (nvme0n3) 00:21:38.789 Could not set queue depth (nvme0n4) 00:21:38.789 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:38.789 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:38.789 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:38.789 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:38.789 fio-3.35 00:21:38.789 Starting 4 threads 00:21:40.325 00:21:40.325 job0: (groupid=0, jobs=1): err= 0: pid=69006: Fri Apr 26 12:18:33 2024 00:21:40.325 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:21:40.325 slat (usec): min=8, max=5678, avg=80.63, stdev=488.55 00:21:40.325 clat (usec): min=5921, max=18750, avg=11419.77, stdev=1430.27 00:21:40.325 lat (usec): min=5931, max=22238, avg=11500.40, stdev=1454.91 00:21:40.325 clat percentiles (usec): 00:21:40.325 | 1.00th=[ 7373], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10552], 00:21:40.325 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:21:40.325 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[12780], 00:21:40.325 | 99.00th=[17433], 99.50th=[18220], 99.90th=[18744], 99.95th=[18744], 00:21:40.325 | 99.99th=[18744] 00:21:40.325 write: IOPS=6066, BW=23.7MiB/s (24.8MB/s)(23.7MiB/1002msec); 0 zone resets 00:21:40.325 slat (usec): min=10, max=6950, avg=82.32, stdev=463.12 00:21:40.325 clat (usec): min=464, max=14279, avg=10304.78, stdev=1286.43 00:21:40.325 lat (usec): min=3724, max=14380, avg=10387.10, stdev=1220.62 00:21:40.325 clat percentiles (usec): 00:21:40.325 | 1.00th=[ 5145], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9503], 00:21:40.325 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:21:40.325 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11731], 00:21:40.325 | 99.00th=[14091], 99.50th=[14222], 99.90th=[14222], 99.95th=[14222], 00:21:40.325 | 99.99th=[14222] 00:21:40.325 bw ( KiB/s): min=23504, max=24104, per=38.00%, avg=23804.00, stdev=424.26, samples=2 00:21:40.325 iops : min= 5876, max= 6026, avg=5951.00, stdev=106.07, samples=2 00:21:40.325 lat (usec) : 500=0.01% 00:21:40.325 lat (msec) : 4=0.10%, 10=25.90%, 20=73.99% 00:21:40.325 cpu : usr=4.50%, sys=16.48%, ctx=251, majf=0, minf=12 00:21:40.325 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:40.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:40.325 issued rwts: total=5632,6079,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.325 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:40.325 job1: (groupid=0, jobs=1): err= 0: pid=69007: Fri Apr 26 12:18:33 2024 00:21:40.325 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:21:40.325 slat (usec): min=8, max=20226, avg=185.36, stdev=1266.24 00:21:40.325 clat (usec): min=15037, max=40892, avg=25353.06, stdev=3273.32 00:21:40.325 lat (usec): min=15053, max=48516, avg=25538.42, stdev=3328.11 00:21:40.325 clat percentiles (usec): 00:21:40.325 | 1.00th=[15664], 5.00th=[20317], 10.00th=[21627], 20.00th=[24249], 00:21:40.325 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:21:40.325 | 70.00th=[26346], 80.00th=[26346], 
90.00th=[27657], 95.00th=[31065], 00:21:40.325 | 99.00th=[39584], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:21:40.325 | 99.99th=[40633] 00:21:40.325 write: IOPS=2864, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1005msec); 0 zone resets 00:21:40.325 slat (usec): min=12, max=19354, avg=173.67, stdev=1159.36 00:21:40.325 clat (usec): min=3107, max=33365, avg=21657.09, stdev=4727.64 00:21:40.325 lat (usec): min=9588, max=33392, avg=21830.77, stdev=4643.48 00:21:40.325 clat percentiles (usec): 00:21:40.325 | 1.00th=[10290], 5.00th=[10814], 10.00th=[14222], 20.00th=[19268], 00:21:40.325 | 30.00th=[21627], 40.00th=[22152], 50.00th=[22938], 60.00th=[23725], 00:21:40.325 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25035], 95.00th=[25822], 00:21:40.325 | 99.00th=[33424], 99.50th=[33424], 99.90th=[33424], 99.95th=[33424], 00:21:40.325 | 99.99th=[33424] 00:21:40.325 bw ( KiB/s): min=10240, max=11791, per=17.58%, avg=11015.50, stdev=1096.72, samples=2 00:21:40.325 iops : min= 2560, max= 2947, avg=2753.50, stdev=273.65, samples=2 00:21:40.325 lat (msec) : 4=0.02%, 10=0.17%, 20=13.27%, 50=86.54% 00:21:40.325 cpu : usr=3.59%, sys=8.37%, ctx=116, majf=0, minf=9 00:21:40.325 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:40.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:40.325 issued rwts: total=2560,2879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.325 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:40.325 job2: (groupid=0, jobs=1): err= 0: pid=69008: Fri Apr 26 12:18:33 2024 00:21:40.325 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:21:40.325 slat (usec): min=8, max=23730, avg=121.14, stdev=880.54 00:21:40.325 clat (usec): min=7527, max=79906, avg=17045.09, stdev=10946.78 00:21:40.325 lat (usec): min=7543, max=79960, avg=17166.22, stdev=11029.10 00:21:40.325 clat percentiles (usec): 00:21:40.325 | 1.00th=[ 8717], 5.00th=[12125], 10.00th=[12649], 20.00th=[13042], 00:21:40.325 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13829], 00:21:40.325 | 70.00th=[14091], 80.00th=[14484], 90.00th=[20055], 95.00th=[47449], 00:21:40.325 | 99.00th=[66323], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847], 00:21:40.325 | 99.99th=[80217] 00:21:40.325 write: IOPS=4233, BW=16.5MiB/s (17.3MB/s)(16.7MiB/1008msec); 0 zone resets 00:21:40.325 slat (usec): min=7, max=10037, avg=110.37, stdev=681.15 00:21:40.325 clat (usec): min=5312, max=45772, avg=13553.52, stdev=4943.46 00:21:40.325 lat (usec): min=7068, max=49197, avg=13663.88, stdev=4949.80 00:21:40.325 clat percentiles (usec): 00:21:40.325 | 1.00th=[ 7111], 5.00th=[ 8979], 10.00th=[11338], 20.00th=[11731], 00:21:40.325 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[12780], 00:21:40.325 | 70.00th=[13042], 80.00th=[14484], 90.00th=[16057], 95.00th=[18220], 00:21:40.325 | 99.00th=[41157], 99.50th=[45351], 99.90th=[45876], 99.95th=[45876], 00:21:40.326 | 99.99th=[45876] 00:21:40.326 bw ( KiB/s): min=12632, max=20480, per=26.43%, avg=16556.00, stdev=5549.37, samples=2 00:21:40.326 iops : min= 3158, max= 5120, avg=4139.00, stdev=1387.34, samples=2 00:21:40.326 lat (msec) : 10=4.40%, 20=88.83%, 50=4.60%, 100=2.16% 00:21:40.326 cpu : usr=4.77%, sys=10.82%, ctx=200, majf=0, minf=7 00:21:40.326 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:40.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.326 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:40.326 issued rwts: total=4096,4267,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.326 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:40.326 job3: (groupid=0, jobs=1): err= 0: pid=69009: Fri Apr 26 12:18:33 2024 00:21:40.326 read: IOPS=2285, BW=9140KiB/s (9359kB/s)(9204KiB/1007msec) 00:21:40.326 slat (usec): min=9, max=18081, avg=203.41, stdev=1297.30 00:21:40.326 clat (usec): min=5073, max=58656, avg=27252.64, stdev=5863.59 00:21:40.326 lat (usec): min=13197, max=63719, avg=27456.05, stdev=5905.44 00:21:40.326 clat percentiles (usec): 00:21:40.326 | 1.00th=[15664], 5.00th=[23462], 10.00th=[24511], 20.00th=[25297], 00:21:40.326 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:21:40.326 | 70.00th=[26346], 80.00th=[27132], 90.00th=[33817], 95.00th=[41681], 00:21:40.326 | 99.00th=[50070], 99.50th=[50070], 99.90th=[54264], 99.95th=[54264], 00:21:40.326 | 99.99th=[58459] 00:21:40.326 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:21:40.326 slat (usec): min=7, max=20123, avg=198.73, stdev=1280.23 00:21:40.326 clat (usec): min=12403, max=51938, avg=25310.87, stdev=5089.80 00:21:40.326 lat (usec): min=16494, max=51973, avg=25509.59, stdev=5030.94 00:21:40.326 clat percentiles (usec): 00:21:40.326 | 1.00th=[14877], 5.00th=[21627], 10.00th=[21890], 20.00th=[22414], 00:21:40.326 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[24511], 00:21:40.326 | 70.00th=[24773], 80.00th=[25297], 90.00th=[33162], 95.00th=[40109], 00:21:40.326 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[47973], 00:21:40.326 | 99.99th=[52167] 00:21:40.326 bw ( KiB/s): min= 8712, max=11791, per=16.37%, avg=10251.50, stdev=2177.18, samples=2 00:21:40.326 iops : min= 2178, max= 2947, avg=2562.50, stdev=543.77, samples=2 00:21:40.326 lat (msec) : 10=0.02%, 20=2.88%, 50=96.67%, 100=0.43% 00:21:40.326 cpu : usr=2.78%, sys=8.45%, ctx=173, majf=0, minf=13 00:21:40.326 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:40.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:40.326 issued rwts: total=2301,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.326 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:40.326 00:21:40.326 Run status group 0 (all jobs): 00:21:40.326 READ: bw=56.5MiB/s (59.3MB/s), 9140KiB/s-22.0MiB/s (9359kB/s-23.0MB/s), io=57.0MiB (59.8MB), run=1002-1008msec 00:21:40.326 WRITE: bw=61.2MiB/s (64.1MB/s), 9.93MiB/s-23.7MiB/s (10.4MB/s-24.8MB/s), io=61.7MiB (64.7MB), run=1002-1008msec 00:21:40.326 00:21:40.326 Disk stats (read/write): 00:21:40.326 nvme0n1: ios=4658/5056, merge=0/0, ticks=50896/49276, in_queue=100172, util=87.66% 00:21:40.326 nvme0n2: ios=2097/2368, merge=0/0, ticks=50490/52112, in_queue=102602, util=88.15% 00:21:40.326 nvme0n3: ios=3801/4096, merge=0/0, ticks=48752/47471, in_queue=96223, util=88.46% 00:21:40.326 nvme0n4: ios=2048/2241, merge=0/0, ticks=50427/50016, in_queue=100443, util=89.44% 00:21:40.326 12:18:33 -- target/fio.sh@55 -- # sync 00:21:40.326 12:18:33 -- target/fio.sh@59 -- # fio_pid=69022 00:21:40.326 12:18:33 -- target/fio.sh@61 -- # sleep 3 00:21:40.326 12:18:33 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:21:40.326 [global] 00:21:40.326 thread=1 00:21:40.326 invalidate=1 00:21:40.326 rw=read 
00:21:40.326 time_based=1 00:21:40.326 runtime=10 00:21:40.326 ioengine=libaio 00:21:40.326 direct=1 00:21:40.326 bs=4096 00:21:40.326 iodepth=1 00:21:40.326 norandommap=1 00:21:40.326 numjobs=1 00:21:40.326 00:21:40.326 [job0] 00:21:40.326 filename=/dev/nvme0n1 00:21:40.326 [job1] 00:21:40.326 filename=/dev/nvme0n2 00:21:40.326 [job2] 00:21:40.326 filename=/dev/nvme0n3 00:21:40.326 [job3] 00:21:40.326 filename=/dev/nvme0n4 00:21:40.326 Could not set queue depth (nvme0n1) 00:21:40.326 Could not set queue depth (nvme0n2) 00:21:40.326 Could not set queue depth (nvme0n3) 00:21:40.326 Could not set queue depth (nvme0n4) 00:21:40.326 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:40.326 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:40.326 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:40.326 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:40.326 fio-3.35 00:21:40.326 Starting 4 threads 00:21:43.618 12:18:36 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:21:43.618 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=45727744, buflen=4096 00:21:43.618 fio: pid=69065, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:43.618 12:18:36 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:21:43.618 fio: pid=69064, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:43.619 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=51904512, buflen=4096 00:21:43.619 12:18:36 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:43.619 12:18:36 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:21:43.876 fio: pid=69062, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:43.876 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=10301440, buflen=4096 00:21:43.876 12:18:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:43.876 12:18:37 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:21:44.134 fio: pid=69063, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:44.134 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=5697536, buflen=4096 00:21:44.134 00:21:44.134 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69062: Fri Apr 26 12:18:37 2024 00:21:44.134 read: IOPS=5407, BW=21.1MiB/s (22.1MB/s)(73.8MiB/3495msec) 00:21:44.134 slat (usec): min=8, max=12110, avg=16.67, stdev=149.66 00:21:44.134 clat (usec): min=132, max=2005, avg=166.80, stdev=33.62 00:21:44.134 lat (usec): min=147, max=12295, avg=183.47, stdev=154.43 00:21:44.134 clat percentiles (usec): 00:21:44.134 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:21:44.134 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:21:44.134 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 208], 00:21:44.134 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 297], 99.95th=[ 453], 00:21:44.134 | 99.99th=[ 1729] 00:21:44.134 bw ( KiB/s): min=21216, max=22864, per=34.51%, avg=22249.33, stdev=620.18, 
samples=6 00:21:44.134 iops : min= 5304, max= 5716, avg=5562.33, stdev=155.05, samples=6 00:21:44.134 lat (usec) : 250=98.04%, 500=1.90%, 750=0.01%, 1000=0.01% 00:21:44.134 lat (msec) : 2=0.02%, 4=0.01% 00:21:44.134 cpu : usr=1.66%, sys=7.16%, ctx=18910, majf=0, minf=1 00:21:44.134 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:44.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:44.134 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:44.134 issued rwts: total=18900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:44.134 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:44.134 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69063: Fri Apr 26 12:18:37 2024 00:21:44.134 read: IOPS=4735, BW=18.5MiB/s (19.4MB/s)(69.4MiB/3754msec) 00:21:44.134 slat (usec): min=8, max=15291, avg=18.15, stdev=187.69 00:21:44.134 clat (usec): min=50, max=2783, avg=191.60, stdev=40.62 00:21:44.134 lat (usec): min=169, max=15504, avg=209.75, stdev=193.56 00:21:44.134 clat percentiles (usec): 00:21:44.134 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 178], 00:21:44.134 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 190], 00:21:44.134 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 210], 95.00th=[ 239], 00:21:44.134 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 482], 99.95th=[ 742], 00:21:44.134 | 99.99th=[ 2638] 00:21:44.134 bw ( KiB/s): min=16828, max=19752, per=29.42%, avg=18967.43, stdev=1054.41, samples=7 00:21:44.134 iops : min= 4207, max= 4938, avg=4741.86, stdev=263.60, samples=7 00:21:44.134 lat (usec) : 100=0.01%, 250=97.40%, 500=2.50%, 750=0.05%, 1000=0.01% 00:21:44.134 lat (msec) : 2=0.02%, 4=0.02% 00:21:44.134 cpu : usr=1.41%, sys=6.08%, ctx=17799, majf=0, minf=1 00:21:44.134 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:44.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:44.134 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:44.134 issued rwts: total=17776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:44.134 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:44.134 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69064: Fri Apr 26 12:18:37 2024 00:21:44.134 read: IOPS=3933, BW=15.4MiB/s (16.1MB/s)(49.5MiB/3222msec) 00:21:44.134 slat (usec): min=8, max=9996, avg=16.10, stdev=112.81 00:21:44.134 clat (usec): min=142, max=3453, avg=236.67, stdev=58.52 00:21:44.134 lat (usec): min=155, max=10179, avg=252.77, stdev=126.03 00:21:44.134 clat percentiles (usec): 00:21:44.134 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 182], 00:21:44.134 | 30.00th=[ 200], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 260], 00:21:44.134 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:21:44.134 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 367], 99.95th=[ 914], 00:21:44.134 | 99.99th=[ 2540] 00:21:44.134 bw ( KiB/s): min=14456, max=19656, per=24.05%, avg=15505.33, stdev=2049.61, samples=6 00:21:44.134 iops : min= 3614, max= 4914, avg=3876.33, stdev=512.40, samples=6 00:21:44.134 lat (usec) : 250=46.09%, 500=53.83%, 750=0.02%, 1000=0.02% 00:21:44.134 lat (msec) : 2=0.02%, 4=0.02% 00:21:44.134 cpu : usr=1.43%, sys=5.40%, ctx=12680, majf=0, minf=1 00:21:44.134 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:44.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:44.134 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:44.134 issued rwts: total=12673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:44.134 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:44.134 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69065: Fri Apr 26 12:18:37 2024 00:21:44.134 read: IOPS=3789, BW=14.8MiB/s (15.5MB/s)(43.6MiB/2946msec) 00:21:44.134 slat (usec): min=8, max=184, avg=12.92, stdev= 4.06 00:21:44.134 clat (usec): min=150, max=7756, avg=249.42, stdev=111.08 00:21:44.134 lat (usec): min=164, max=7769, avg=262.34, stdev=110.63 00:21:44.134 clat percentiles (usec): 00:21:44.134 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 206], 00:21:44.134 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:21:44.134 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 289], 00:21:44.134 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 537], 99.95th=[ 2343], 00:21:44.134 | 99.99th=[ 6325] 00:21:44.134 bw ( KiB/s): min=14456, max=18424, per=23.79%, avg=15340.80, stdev=1733.32, samples=5 00:21:44.134 iops : min= 3614, max= 4606, avg=3835.20, stdev=433.33, samples=5 00:21:44.135 lat (usec) : 250=35.75%, 500=64.14%, 750=0.02%, 1000=0.03% 00:21:44.135 lat (msec) : 2=0.01%, 4=0.04%, 10=0.02% 00:21:44.135 cpu : usr=1.36%, sys=4.35%, ctx=11166, majf=0, minf=1 00:21:44.135 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:44.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:44.135 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:44.135 issued rwts: total=11165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:44.135 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:44.135 00:21:44.135 Run status group 0 (all jobs): 00:21:44.135 READ: bw=63.0MiB/s (66.0MB/s), 14.8MiB/s-21.1MiB/s (15.5MB/s-22.1MB/s), io=236MiB (248MB), run=2946-3754msec 00:21:44.135 00:21:44.135 Disk stats (read/write): 00:21:44.135 nvme0n1: ios=18427/0, merge=0/0, ticks=3074/0, in_queue=3074, util=95.31% 00:21:44.135 nvme0n2: ios=17072/0, merge=0/0, ticks=3314/0, in_queue=3314, util=95.40% 00:21:44.135 nvme0n3: ios=12151/0, merge=0/0, ticks=2864/0, in_queue=2864, util=96.37% 00:21:44.135 nvme0n4: ios=10899/0, merge=0/0, ticks=2582/0, in_queue=2582, util=96.42% 00:21:44.135 12:18:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:44.135 12:18:37 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:21:44.391 12:18:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:44.391 12:18:37 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:21:44.648 12:18:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:44.648 12:18:38 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:21:44.906 12:18:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:44.906 12:18:38 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:21:45.190 12:18:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:45.190 12:18:38 -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:21:45.465 12:18:38 -- target/fio.sh@69 -- # fio_status=0 00:21:45.465 12:18:38 -- target/fio.sh@70 -- # wait 69022 00:21:45.465 12:18:38 -- target/fio.sh@70 -- # fio_status=4 00:21:45.465 12:18:38 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:45.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:45.465 12:18:38 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:45.465 12:18:38 -- common/autotest_common.sh@1205 -- # local i=0 00:21:45.465 12:18:38 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:45.465 12:18:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:45.465 12:18:38 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:45.465 12:18:38 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:45.465 nvmf hotplug test: fio failed as expected 00:21:45.465 12:18:38 -- common/autotest_common.sh@1217 -- # return 0 00:21:45.465 12:18:38 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:21:45.465 12:18:38 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:21:45.465 12:18:38 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:45.723 12:18:39 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:21:45.723 12:18:39 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:21:45.723 12:18:39 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:21:45.723 12:18:39 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:21:45.723 12:18:39 -- target/fio.sh@91 -- # nvmftestfini 00:21:45.723 12:18:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:45.723 12:18:39 -- nvmf/common.sh@117 -- # sync 00:21:45.723 12:18:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:45.723 12:18:39 -- nvmf/common.sh@120 -- # set +e 00:21:45.723 12:18:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:45.723 12:18:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:45.723 rmmod nvme_tcp 00:21:45.723 rmmod nvme_fabrics 00:21:45.723 rmmod nvme_keyring 00:21:45.723 12:18:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:45.723 12:18:39 -- nvmf/common.sh@124 -- # set -e 00:21:45.723 12:18:39 -- nvmf/common.sh@125 -- # return 0 00:21:45.723 12:18:39 -- nvmf/common.sh@478 -- # '[' -n 68640 ']' 00:21:45.723 12:18:39 -- nvmf/common.sh@479 -- # killprocess 68640 00:21:45.723 12:18:39 -- common/autotest_common.sh@936 -- # '[' -z 68640 ']' 00:21:45.723 12:18:39 -- common/autotest_common.sh@940 -- # kill -0 68640 00:21:45.723 12:18:39 -- common/autotest_common.sh@941 -- # uname 00:21:45.723 12:18:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:45.723 12:18:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68640 00:21:45.724 killing process with pid 68640 00:21:45.724 12:18:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:45.724 12:18:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:45.724 12:18:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68640' 00:21:45.724 12:18:39 -- common/autotest_common.sh@955 -- # kill 68640 00:21:45.724 12:18:39 -- common/autotest_common.sh@960 -- # wait 68640 00:21:45.981 12:18:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:45.981 12:18:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:45.981 12:18:39 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:21:45.981 12:18:39 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.981 12:18:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:45.981 12:18:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.981 12:18:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.981 12:18:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.239 12:18:39 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:46.239 ************************************ 00:21:46.239 END TEST nvmf_fio_target 00:21:46.239 ************************************ 00:21:46.239 00:21:46.239 real 0m19.077s 00:21:46.239 user 1m11.114s 00:21:46.239 sys 0m10.642s 00:21:46.239 12:18:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:46.239 12:18:39 -- common/autotest_common.sh@10 -- # set +x 00:21:46.239 12:18:39 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:46.239 12:18:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:46.239 12:18:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:46.239 12:18:39 -- common/autotest_common.sh@10 -- # set +x 00:21:46.239 ************************************ 00:21:46.239 START TEST nvmf_bdevio 00:21:46.239 ************************************ 00:21:46.239 12:18:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:46.239 * Looking for test storage... 00:21:46.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:46.239 12:18:39 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:46.239 12:18:39 -- nvmf/common.sh@7 -- # uname -s 00:21:46.239 12:18:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.239 12:18:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.239 12:18:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.239 12:18:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.239 12:18:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.239 12:18:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.239 12:18:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.239 12:18:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.240 12:18:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.240 12:18:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.240 12:18:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:21:46.240 12:18:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:21:46.240 12:18:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.240 12:18:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.240 12:18:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:46.240 12:18:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.240 12:18:39 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:46.240 12:18:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.240 12:18:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.240 12:18:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.240 12:18:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.240 12:18:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.240 12:18:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.240 12:18:39 -- paths/export.sh@5 -- # export PATH 00:21:46.240 12:18:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.240 12:18:39 -- nvmf/common.sh@47 -- # : 0 00:21:46.240 12:18:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.240 12:18:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.240 12:18:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.240 12:18:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.240 12:18:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.240 12:18:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.240 12:18:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.240 12:18:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.240 12:18:39 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:46.240 12:18:39 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:46.240 12:18:39 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:46.240 12:18:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:46.240 12:18:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.240 12:18:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:46.240 12:18:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:46.240 12:18:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:46.240 12:18:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:21:46.240 12:18:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.240 12:18:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.498 12:18:39 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:46.498 12:18:39 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:46.498 12:18:39 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:46.498 12:18:39 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:46.498 12:18:39 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:46.498 12:18:39 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:46.498 12:18:39 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.498 12:18:39 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.498 12:18:39 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:46.498 12:18:39 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:46.498 12:18:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:46.498 12:18:39 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:46.498 12:18:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:46.498 12:18:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.498 12:18:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:46.498 12:18:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:46.498 12:18:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:46.498 12:18:39 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:46.498 12:18:39 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:46.498 12:18:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:46.498 Cannot find device "nvmf_tgt_br" 00:21:46.498 12:18:39 -- nvmf/common.sh@155 -- # true 00:21:46.498 12:18:39 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:46.498 Cannot find device "nvmf_tgt_br2" 00:21:46.498 12:18:39 -- nvmf/common.sh@156 -- # true 00:21:46.498 12:18:39 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:46.498 12:18:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:46.498 Cannot find device "nvmf_tgt_br" 00:21:46.498 12:18:39 -- nvmf/common.sh@158 -- # true 00:21:46.498 12:18:39 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:46.498 Cannot find device "nvmf_tgt_br2" 00:21:46.498 12:18:39 -- nvmf/common.sh@159 -- # true 00:21:46.498 12:18:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:46.499 12:18:39 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:46.499 12:18:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:46.499 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:46.499 12:18:39 -- nvmf/common.sh@162 -- # true 00:21:46.499 12:18:39 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:46.499 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:46.499 12:18:39 -- nvmf/common.sh@163 -- # true 00:21:46.499 12:18:39 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:46.499 12:18:39 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:46.499 12:18:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:46.499 12:18:39 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:46.499 
12:18:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:46.499 12:18:39 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:46.499 12:18:39 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:46.499 12:18:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:46.499 12:18:39 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:46.499 12:18:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:46.499 12:18:39 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:46.499 12:18:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:46.499 12:18:39 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:46.499 12:18:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:46.499 12:18:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:46.499 12:18:39 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:46.499 12:18:39 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:46.499 12:18:39 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:46.499 12:18:39 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:46.757 12:18:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:46.757 12:18:39 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:46.757 12:18:40 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:46.757 12:18:40 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:46.757 12:18:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:46.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:21:46.757 00:21:46.757 --- 10.0.0.2 ping statistics --- 00:21:46.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.757 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:46.757 12:18:40 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:46.757 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:46.757 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:21:46.757 00:21:46.757 --- 10.0.0.3 ping statistics --- 00:21:46.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.757 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:46.757 12:18:40 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:46.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:46.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:21:46.757 00:21:46.757 --- 10.0.0.1 ping statistics --- 00:21:46.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.757 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:21:46.757 12:18:40 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.757 12:18:40 -- nvmf/common.sh@422 -- # return 0 00:21:46.757 12:18:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:46.757 12:18:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.757 12:18:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:46.757 12:18:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:46.757 12:18:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.757 12:18:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:46.757 12:18:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:46.757 12:18:40 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:46.757 12:18:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:46.757 12:18:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:46.757 12:18:40 -- common/autotest_common.sh@10 -- # set +x 00:21:46.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.757 12:18:40 -- nvmf/common.sh@470 -- # nvmfpid=69343 00:21:46.757 12:18:40 -- nvmf/common.sh@471 -- # waitforlisten 69343 00:21:46.757 12:18:40 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:21:46.757 12:18:40 -- common/autotest_common.sh@817 -- # '[' -z 69343 ']' 00:21:46.757 12:18:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.757 12:18:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:46.757 12:18:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.757 12:18:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:46.757 12:18:40 -- common/autotest_common.sh@10 -- # set +x 00:21:46.757 [2024-04-26 12:18:40.109130] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:21:46.757 [2024-04-26 12:18:40.109464] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.015 [2024-04-26 12:18:40.253916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:47.015 [2024-04-26 12:18:40.378708] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.015 [2024-04-26 12:18:40.379009] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.015 [2024-04-26 12:18:40.379381] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.015 [2024-04-26 12:18:40.379544] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.015 [2024-04-26 12:18:40.379720] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
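Stripped of the xtrace prefixes, the nvmf_veth_init sequence traced above builds one bridge with a veth leg on the host side (10.0.0.1) and two legs inside the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), then verifies connectivity with single pings before the target is launched inside the namespace. A condensed sketch of the same steps, assembled from the commands shown in this log (abbreviated; the real nvmf/common.sh adds cleanup and error handling), is:

    # Condensed from the nvmf_veth_init trace above; not the full script.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    # The target itself then runs inside the namespace, as logged above:
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78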
00:21:47.015 [2024-04-26 12:18:40.379900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:47.015 [2024-04-26 12:18:40.380002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:47.015 [2024-04-26 12:18:40.380134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:47.015 [2024-04-26 12:18:40.380143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.579 12:18:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:47.579 12:18:40 -- common/autotest_common.sh@850 -- # return 0 00:21:47.579 12:18:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:47.579 12:18:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:47.579 12:18:40 -- common/autotest_common.sh@10 -- # set +x 00:21:47.579 12:18:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.579 12:18:41 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:47.579 12:18:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.579 12:18:41 -- common/autotest_common.sh@10 -- # set +x 00:21:47.579 [2024-04-26 12:18:41.027687] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.579 12:18:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.579 12:18:41 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:47.579 12:18:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.579 12:18:41 -- common/autotest_common.sh@10 -- # set +x 00:21:47.837 Malloc0 00:21:47.837 12:18:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.837 12:18:41 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:47.837 12:18:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.837 12:18:41 -- common/autotest_common.sh@10 -- # set +x 00:21:47.837 12:18:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.837 12:18:41 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:47.837 12:18:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.837 12:18:41 -- common/autotest_common.sh@10 -- # set +x 00:21:47.837 12:18:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.837 12:18:41 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:47.837 12:18:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.837 12:18:41 -- common/autotest_common.sh@10 -- # set +x 00:21:47.837 [2024-04-26 12:18:41.091693] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.837 12:18:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.837 12:18:41 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:21:47.837 12:18:41 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:47.837 12:18:41 -- nvmf/common.sh@521 -- # config=() 00:21:47.837 12:18:41 -- nvmf/common.sh@521 -- # local subsystem config 00:21:47.837 12:18:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:47.837 12:18:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:47.837 { 00:21:47.837 "params": { 00:21:47.837 "name": "Nvme$subsystem", 00:21:47.837 "trtype": "$TEST_TRANSPORT", 00:21:47.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.837 "adrfam": "ipv4", 00:21:47.837 "trsvcid": "$NVMF_PORT", 00:21:47.837 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.837 "hdgst": ${hdgst:-false}, 00:21:47.837 "ddgst": ${ddgst:-false} 00:21:47.837 }, 00:21:47.837 "method": "bdev_nvme_attach_controller" 00:21:47.837 } 00:21:47.837 EOF 00:21:47.837 )") 00:21:47.837 12:18:41 -- nvmf/common.sh@543 -- # cat 00:21:47.837 12:18:41 -- nvmf/common.sh@545 -- # jq . 00:21:47.837 12:18:41 -- nvmf/common.sh@546 -- # IFS=, 00:21:47.837 12:18:41 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:47.837 "params": { 00:21:47.837 "name": "Nvme1", 00:21:47.837 "trtype": "tcp", 00:21:47.837 "traddr": "10.0.0.2", 00:21:47.837 "adrfam": "ipv4", 00:21:47.837 "trsvcid": "4420", 00:21:47.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:47.837 "hdgst": false, 00:21:47.837 "ddgst": false 00:21:47.837 }, 00:21:47.837 "method": "bdev_nvme_attach_controller" 00:21:47.837 }' 00:21:47.837 [2024-04-26 12:18:41.143855] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:21:47.837 [2024-04-26 12:18:41.143951] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69379 ] 00:21:47.837 [2024-04-26 12:18:41.281055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:48.095 [2024-04-26 12:18:41.402358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.095 [2024-04-26 12:18:41.402463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.095 [2024-04-26 12:18:41.402465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.357 I/O targets: 00:21:48.357 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:48.357 00:21:48.357 00:21:48.357 CUnit - A unit testing framework for C - Version 2.1-3 00:21:48.357 http://cunit.sourceforge.net/ 00:21:48.357 00:21:48.357 00:21:48.357 Suite: bdevio tests on: Nvme1n1 00:21:48.357 Test: blockdev write read block ...passed 00:21:48.357 Test: blockdev write zeroes read block ...passed 00:21:48.357 Test: blockdev write zeroes read no split ...passed 00:21:48.357 Test: blockdev write zeroes read split ...passed 00:21:48.357 Test: blockdev write zeroes read split partial ...passed 00:21:48.357 Test: blockdev reset ...[2024-04-26 12:18:41.621321] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.357 [2024-04-26 12:18:41.621427] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147f6c0 (9): Bad file descriptor 00:21:48.357 [2024-04-26 12:18:41.638085] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:48.357 passed 00:21:48.357 Test: blockdev write read 8 blocks ...passed 00:21:48.357 Test: blockdev write read size > 128k ...passed 00:21:48.357 Test: blockdev write read invalid size ...passed 00:21:48.357 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:48.357 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:48.357 Test: blockdev write read max offset ...passed 00:21:48.357 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:48.357 Test: blockdev writev readv 8 blocks ...passed 00:21:48.357 Test: blockdev writev readv 30 x 1block ...passed 00:21:48.357 Test: blockdev writev readv block ...passed 00:21:48.357 Test: blockdev writev readv size > 128k ...passed 00:21:48.357 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:48.357 Test: blockdev comparev and writev ...[2024-04-26 12:18:41.648777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.357 [2024-04-26 12:18:41.648839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.357 [2024-04-26 12:18:41.648865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.357 [2024-04-26 12:18:41.648879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.357 [2024-04-26 12:18:41.649442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.357 [2024-04-26 12:18:41.649488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:48.357 [2024-04-26 12:18:41.649524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.357 [2024-04-26 12:18:41.649542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:48.357 [2024-04-26 12:18:41.650161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.357 [2024-04-26 12:18:41.650211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:48.357 [2024-04-26 12:18:41.650235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.357 [2024-04-26 12:18:41.650248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:48.357 [2024-04-26 12:18:41.650616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.357 [2024-04-26 12:18:41.650654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:48.357 [2024-04-26 12:18:41.650677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.357 [2024-04-26 12:18:41.650690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:48.357 passed 00:21:48.357 Test: blockdev nvme passthru rw ...passed 00:21:48.357 Test: blockdev nvme passthru vendor specific ...[2024-04-26 12:18:41.652092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:48.357 [2024-04-26 12:18:41.652408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:48.357 [2024-04-26 12:18:41.652776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:48.357 [2024-04-26 12:18:41.652817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:48.357 [2024-04-26 12:18:41.653137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:48.357 [2024-04-26 12:18:41.653188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:48.357 [2024-04-26 12:18:41.653555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:48.357 [2024-04-26 12:18:41.653592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:48.357 passed 00:21:48.357 Test: blockdev nvme admin passthru ...passed 00:21:48.357 Test: blockdev copy ...passed 00:21:48.357 00:21:48.357 Run Summary: Type Total Ran Passed Failed Inactive 00:21:48.357 suites 1 1 n/a 0 0 00:21:48.357 tests 23 23 23 0 0 00:21:48.357 asserts 152 152 152 0 n/a 00:21:48.357 00:21:48.357 Elapsed time = 0.160 seconds 00:21:48.616 12:18:41 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.616 12:18:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:48.616 12:18:41 -- common/autotest_common.sh@10 -- # set +x 00:21:48.616 12:18:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:48.616 12:18:41 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:48.616 12:18:41 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:48.616 12:18:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:48.616 12:18:41 -- nvmf/common.sh@117 -- # sync 00:21:48.616 12:18:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:48.616 12:18:41 -- nvmf/common.sh@120 -- # set +e 00:21:48.616 12:18:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:48.616 12:18:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:48.616 rmmod nvme_tcp 00:21:48.616 rmmod nvme_fabrics 00:21:48.616 rmmod nvme_keyring 00:21:48.616 12:18:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:48.616 12:18:42 -- nvmf/common.sh@124 -- # set -e 00:21:48.616 12:18:42 -- nvmf/common.sh@125 -- # return 0 00:21:48.616 12:18:42 -- nvmf/common.sh@478 -- # '[' -n 69343 ']' 00:21:48.616 12:18:42 -- nvmf/common.sh@479 -- # killprocess 69343 00:21:48.616 12:18:42 -- common/autotest_common.sh@936 -- # '[' -z 69343 ']' 00:21:48.616 12:18:42 -- common/autotest_common.sh@940 -- # kill -0 69343 00:21:48.616 12:18:42 -- common/autotest_common.sh@941 -- # uname 00:21:48.616 12:18:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:48.616 12:18:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69343 00:21:48.616 12:18:42 -- common/autotest_common.sh@942 -- 
# process_name=reactor_3 00:21:48.616 12:18:42 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:21:48.616 killing process with pid 69343 00:21:48.616 12:18:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69343' 00:21:48.616 12:18:42 -- common/autotest_common.sh@955 -- # kill 69343 00:21:48.616 12:18:42 -- common/autotest_common.sh@960 -- # wait 69343 00:21:49.184 12:18:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:49.184 12:18:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:49.184 12:18:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:49.184 12:18:42 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:49.184 12:18:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:49.184 12:18:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.184 12:18:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:49.184 12:18:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.184 12:18:42 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:49.184 00:21:49.184 real 0m2.799s 00:21:49.184 user 0m9.102s 00:21:49.184 sys 0m0.748s 00:21:49.184 12:18:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:49.184 12:18:42 -- common/autotest_common.sh@10 -- # set +x 00:21:49.184 ************************************ 00:21:49.184 END TEST nvmf_bdevio 00:21:49.184 ************************************ 00:21:49.184 12:18:42 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:21:49.184 12:18:42 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:49.184 12:18:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:21:49.184 12:18:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:49.184 12:18:42 -- common/autotest_common.sh@10 -- # set +x 00:21:49.184 ************************************ 00:21:49.184 START TEST nvmf_bdevio_no_huge 00:21:49.184 ************************************ 00:21:49.184 12:18:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:49.184 * Looking for test storage... 
00:21:49.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:49.184 12:18:42 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:49.184 12:18:42 -- nvmf/common.sh@7 -- # uname -s 00:21:49.184 12:18:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.184 12:18:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.184 12:18:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:49.184 12:18:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.184 12:18:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.184 12:18:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.184 12:18:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.184 12:18:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.184 12:18:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.184 12:18:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.184 12:18:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:21:49.184 12:18:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:21:49.184 12:18:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.184 12:18:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.184 12:18:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:49.184 12:18:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.184 12:18:42 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:49.184 12:18:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.184 12:18:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.184 12:18:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.184 12:18:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.184 12:18:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.184 12:18:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.184 12:18:42 -- paths/export.sh@5 -- # export PATH 00:21:49.184 12:18:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.184 12:18:42 -- nvmf/common.sh@47 -- # : 0 00:21:49.184 12:18:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:49.184 12:18:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:49.184 12:18:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.184 12:18:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.184 12:18:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.184 12:18:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:49.184 12:18:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:49.184 12:18:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:49.184 12:18:42 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:49.184 12:18:42 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:49.184 12:18:42 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:49.184 12:18:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:49.184 12:18:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.184 12:18:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:49.184 12:18:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:49.184 12:18:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:49.184 12:18:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.184 12:18:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:49.184 12:18:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.184 12:18:42 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:49.184 12:18:42 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:49.184 12:18:42 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:49.184 12:18:42 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:49.184 12:18:42 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:49.184 12:18:42 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:49.184 12:18:42 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.184 12:18:42 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.184 12:18:42 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:49.184 12:18:42 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:49.184 12:18:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:49.184 12:18:42 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:49.184 12:18:42 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:49.184 12:18:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.184 12:18:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:49.184 12:18:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:49.184 12:18:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:49.184 12:18:42 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:49.184 12:18:42 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:49.444 12:18:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:49.444 Cannot find device "nvmf_tgt_br" 00:21:49.444 12:18:42 -- nvmf/common.sh@155 -- # true 00:21:49.444 12:18:42 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:49.444 Cannot find device "nvmf_tgt_br2" 00:21:49.444 12:18:42 -- nvmf/common.sh@156 -- # true 00:21:49.444 12:18:42 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:49.444 12:18:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:49.444 Cannot find device "nvmf_tgt_br" 00:21:49.444 12:18:42 -- nvmf/common.sh@158 -- # true 00:21:49.444 12:18:42 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:49.444 Cannot find device "nvmf_tgt_br2" 00:21:49.444 12:18:42 -- nvmf/common.sh@159 -- # true 00:21:49.444 12:18:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:49.444 12:18:42 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:49.444 12:18:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:49.444 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:49.444 12:18:42 -- nvmf/common.sh@162 -- # true 00:21:49.444 12:18:42 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:49.444 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:49.444 12:18:42 -- nvmf/common.sh@163 -- # true 00:21:49.444 12:18:42 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:49.444 12:18:42 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:49.444 12:18:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:49.444 12:18:42 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:49.444 12:18:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:49.444 12:18:42 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:49.444 12:18:42 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:49.444 12:18:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:49.444 12:18:42 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:49.444 12:18:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:49.444 12:18:42 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:49.444 12:18:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:49.444 12:18:42 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:49.444 12:18:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:49.444 12:18:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:49.444 12:18:42 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:21:49.444 12:18:42 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:49.444 12:18:42 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:49.444 12:18:42 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:49.703 12:18:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:49.703 12:18:42 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:49.703 12:18:42 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:49.703 12:18:42 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:49.703 12:18:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:49.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:21:49.703 00:21:49.703 --- 10.0.0.2 ping statistics --- 00:21:49.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.703 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:21:49.703 12:18:42 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:49.703 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:49.703 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:21:49.703 00:21:49.703 --- 10.0.0.3 ping statistics --- 00:21:49.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.703 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:21:49.703 12:18:42 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:49.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:21:49.703 00:21:49.703 --- 10.0.0.1 ping statistics --- 00:21:49.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.703 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:21:49.703 12:18:42 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.703 12:18:42 -- nvmf/common.sh@422 -- # return 0 00:21:49.703 12:18:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:49.703 12:18:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.703 12:18:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:49.703 12:18:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:49.703 12:18:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.703 12:18:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:49.703 12:18:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:49.703 12:18:42 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:49.703 12:18:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:49.703 12:18:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:49.703 12:18:42 -- common/autotest_common.sh@10 -- # set +x 00:21:49.703 12:18:42 -- nvmf/common.sh@470 -- # nvmfpid=69563 00:21:49.703 12:18:42 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:49.703 12:18:42 -- nvmf/common.sh@471 -- # waitforlisten 69563 00:21:49.703 12:18:42 -- common/autotest_common.sh@817 -- # '[' -z 69563 ']' 00:21:49.703 12:18:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.703 12:18:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:49.703 12:18:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:49.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.703 12:18:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:49.703 12:18:42 -- common/autotest_common.sh@10 -- # set +x 00:21:49.703 [2024-04-26 12:18:43.046366] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:21:49.703 [2024-04-26 12:18:43.046461] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:49.963 [2024-04-26 12:18:43.197010] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.963 [2024-04-26 12:18:43.341708] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.963 [2024-04-26 12:18:43.341784] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.963 [2024-04-26 12:18:43.341798] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.963 [2024-04-26 12:18:43.341809] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.963 [2024-04-26 12:18:43.341818] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.963 [2024-04-26 12:18:43.342002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:49.963 [2024-04-26 12:18:43.342142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:49.963 [2024-04-26 12:18:43.342737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:49.963 [2024-04-26 12:18:43.342785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:50.897 12:18:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:50.897 12:18:44 -- common/autotest_common.sh@850 -- # return 0 00:21:50.897 12:18:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:50.897 12:18:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:50.897 12:18:44 -- common/autotest_common.sh@10 -- # set +x 00:21:50.897 12:18:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.897 12:18:44 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:50.897 12:18:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.897 12:18:44 -- common/autotest_common.sh@10 -- # set +x 00:21:50.897 [2024-04-26 12:18:44.051308] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.897 12:18:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.897 12:18:44 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:50.897 12:18:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.897 12:18:44 -- common/autotest_common.sh@10 -- # set +x 00:21:50.897 Malloc0 00:21:50.897 12:18:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.897 12:18:44 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:50.897 12:18:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.897 12:18:44 -- common/autotest_common.sh@10 -- # set +x 00:21:50.897 12:18:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.897 12:18:44 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:50.897 12:18:44 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.897 12:18:44 -- common/autotest_common.sh@10 -- # set +x 00:21:50.897 12:18:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.897 12:18:44 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:50.897 12:18:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.897 12:18:44 -- common/autotest_common.sh@10 -- # set +x 00:21:50.897 [2024-04-26 12:18:44.095914] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.897 12:18:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.897 12:18:44 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:50.897 12:18:44 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:50.897 12:18:44 -- nvmf/common.sh@521 -- # config=() 00:21:50.897 12:18:44 -- nvmf/common.sh@521 -- # local subsystem config 00:21:50.897 12:18:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:50.897 12:18:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:50.897 { 00:21:50.897 "params": { 00:21:50.897 "name": "Nvme$subsystem", 00:21:50.897 "trtype": "$TEST_TRANSPORT", 00:21:50.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.897 "adrfam": "ipv4", 00:21:50.897 "trsvcid": "$NVMF_PORT", 00:21:50.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.897 "hdgst": ${hdgst:-false}, 00:21:50.897 "ddgst": ${ddgst:-false} 00:21:50.897 }, 00:21:50.897 "method": "bdev_nvme_attach_controller" 00:21:50.897 } 00:21:50.897 EOF 00:21:50.897 )") 00:21:50.897 12:18:44 -- nvmf/common.sh@543 -- # cat 00:21:50.897 12:18:44 -- nvmf/common.sh@545 -- # jq . 00:21:50.897 12:18:44 -- nvmf/common.sh@546 -- # IFS=, 00:21:50.897 12:18:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:50.897 "params": { 00:21:50.897 "name": "Nvme1", 00:21:50.897 "trtype": "tcp", 00:21:50.897 "traddr": "10.0.0.2", 00:21:50.897 "adrfam": "ipv4", 00:21:50.897 "trsvcid": "4420", 00:21:50.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:50.897 "hdgst": false, 00:21:50.897 "ddgst": false 00:21:50.897 }, 00:21:50.897 "method": "bdev_nvme_attach_controller" 00:21:50.897 }' 00:21:50.897 [2024-04-26 12:18:44.182783] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:21:50.897 [2024-04-26 12:18:44.182922] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid69601 ] 00:21:50.898 [2024-04-26 12:18:44.346980] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:51.198 [2024-04-26 12:18:44.533584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.198 [2024-04-26 12:18:44.533722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.198 [2024-04-26 12:18:44.533706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.457 I/O targets: 00:21:51.457 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:51.457 00:21:51.457 00:21:51.457 CUnit - A unit testing framework for C - Version 2.1-3 00:21:51.457 http://cunit.sourceforge.net/ 00:21:51.457 00:21:51.457 00:21:51.457 Suite: bdevio tests on: Nvme1n1 00:21:51.457 Test: blockdev write read block ...passed 00:21:51.457 Test: blockdev write zeroes read block ...passed 00:21:51.457 Test: blockdev write zeroes read no split ...passed 00:21:51.457 Test: blockdev write zeroes read split ...passed 00:21:51.457 Test: blockdev write zeroes read split partial ...passed 00:21:51.457 Test: blockdev reset ...[2024-04-26 12:18:44.740204] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:51.457 [2024-04-26 12:18:44.740316] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc584b0 (9): Bad file descriptor 00:21:51.457 [2024-04-26 12:18:44.755857] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:51.457 passed 00:21:51.457 Test: blockdev write read 8 blocks ...passed 00:21:51.457 Test: blockdev write read size > 128k ...passed 00:21:51.457 Test: blockdev write read invalid size ...passed 00:21:51.457 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:51.457 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:51.457 Test: blockdev write read max offset ...passed 00:21:51.457 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:51.457 Test: blockdev writev readv 8 blocks ...passed 00:21:51.457 Test: blockdev writev readv 30 x 1block ...passed 00:21:51.457 Test: blockdev writev readv block ...passed 00:21:51.458 Test: blockdev writev readv size > 128k ...passed 00:21:51.458 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:51.458 Test: blockdev comparev and writev ...[2024-04-26 12:18:44.766460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.458 [2024-04-26 12:18:44.766512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:51.458 [2024-04-26 12:18:44.766540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.458 [2024-04-26 12:18:44.766563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.458 [2024-04-26 12:18:44.766979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.458 [2024-04-26 12:18:44.767030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:51.458 [2024-04-26 12:18:44.767052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.458 [2024-04-26 12:18:44.767065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:51.458 [2024-04-26 12:18:44.767484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.458 [2024-04-26 12:18:44.767617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:51.458 [2024-04-26 12:18:44.767643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.458 [2024-04-26 12:18:44.767916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:51.458 [2024-04-26 12:18:44.768366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.458 [2024-04-26 12:18:44.768405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:51.458 [2024-04-26 12:18:44.768427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.458 [2024-04-26 12:18:44.768440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:51.458 passed 00:21:51.458 Test: blockdev nvme passthru rw ...passed 00:21:51.458 Test: blockdev nvme passthru vendor specific ...[2024-04-26 12:18:44.770240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:51.458 [2024-04-26 12:18:44.770306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:51.458 [2024-04-26 12:18:44.770787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:51.458 [2024-04-26 12:18:44.770856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:51.458 [2024-04-26 12:18:44.771352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:51.458 [2024-04-26 12:18:44.771423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:51.458 [2024-04-26 12:18:44.771831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:51.458 [2024-04-26 12:18:44.771886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 spassed 00:21:51.458 Test: blockdev nvme admin passthru ...qhd:002f p:0 m:0 dnr:0 00:21:51.458 passed 00:21:51.458 Test: blockdev copy ...passed 00:21:51.458 00:21:51.458 Run Summary: Type Total Ran Passed Failed Inactive 00:21:51.458 suites 1 1 n/a 0 0 00:21:51.458 tests 23 23 23 0 0 00:21:51.458 asserts 152 152 152 0 
n/a 00:21:51.458 00:21:51.458 Elapsed time = 0.185 seconds 00:21:51.717 12:18:45 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:51.717 12:18:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:51.717 12:18:45 -- common/autotest_common.sh@10 -- # set +x 00:21:51.717 12:18:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:51.717 12:18:45 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:51.717 12:18:45 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:51.717 12:18:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:51.717 12:18:45 -- nvmf/common.sh@117 -- # sync 00:21:51.975 12:18:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:51.975 12:18:45 -- nvmf/common.sh@120 -- # set +e 00:21:51.975 12:18:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:51.975 12:18:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:51.975 rmmod nvme_tcp 00:21:51.975 rmmod nvme_fabrics 00:21:51.975 rmmod nvme_keyring 00:21:51.975 12:18:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:51.975 12:18:45 -- nvmf/common.sh@124 -- # set -e 00:21:51.975 12:18:45 -- nvmf/common.sh@125 -- # return 0 00:21:51.975 12:18:45 -- nvmf/common.sh@478 -- # '[' -n 69563 ']' 00:21:51.975 12:18:45 -- nvmf/common.sh@479 -- # killprocess 69563 00:21:51.975 12:18:45 -- common/autotest_common.sh@936 -- # '[' -z 69563 ']' 00:21:51.975 12:18:45 -- common/autotest_common.sh@940 -- # kill -0 69563 00:21:51.975 12:18:45 -- common/autotest_common.sh@941 -- # uname 00:21:51.975 12:18:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:51.975 12:18:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69563 00:21:51.975 killing process with pid 69563 00:21:51.975 12:18:45 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:21:51.976 12:18:45 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:21:51.976 12:18:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69563' 00:21:51.976 12:18:45 -- common/autotest_common.sh@955 -- # kill 69563 00:21:51.976 12:18:45 -- common/autotest_common.sh@960 -- # wait 69563 00:21:52.543 12:18:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:52.543 12:18:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:52.543 12:18:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:52.543 12:18:45 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:52.543 12:18:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:52.543 12:18:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.543 12:18:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.543 12:18:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.543 12:18:45 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:52.543 00:21:52.543 real 0m3.271s 00:21:52.543 user 0m10.709s 00:21:52.543 sys 0m1.222s 00:21:52.543 12:18:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:52.543 12:18:45 -- common/autotest_common.sh@10 -- # set +x 00:21:52.543 ************************************ 00:21:52.543 END TEST nvmf_bdevio_no_huge 00:21:52.543 ************************************ 00:21:52.543 12:18:45 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:52.543 12:18:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:52.543 12:18:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:52.543 12:18:45 -- 
common/autotest_common.sh@10 -- # set +x 00:21:52.543 ************************************ 00:21:52.543 START TEST nvmf_tls 00:21:52.543 ************************************ 00:21:52.543 12:18:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:52.543 * Looking for test storage... 00:21:52.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:52.543 12:18:45 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:52.543 12:18:45 -- nvmf/common.sh@7 -- # uname -s 00:21:52.543 12:18:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.543 12:18:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.543 12:18:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.543 12:18:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.543 12:18:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.543 12:18:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.543 12:18:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.543 12:18:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.543 12:18:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.543 12:18:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.543 12:18:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:21:52.543 12:18:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:21:52.543 12:18:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.543 12:18:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.543 12:18:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:52.543 12:18:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.543 12:18:45 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:52.543 12:18:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.543 12:18:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.543 12:18:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.543 12:18:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.543 12:18:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.543 12:18:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.543 12:18:46 -- paths/export.sh@5 -- # export PATH 00:21:52.543 12:18:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.543 12:18:46 -- nvmf/common.sh@47 -- # : 0 00:21:52.543 12:18:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:52.543 12:18:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:52.543 12:18:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.543 12:18:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.543 12:18:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.543 12:18:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:52.543 12:18:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:52.543 12:18:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:52.543 12:18:46 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:52.543 12:18:46 -- target/tls.sh@62 -- # nvmftestinit 00:21:52.543 12:18:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:52.543 12:18:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.543 12:18:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:52.801 12:18:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:52.801 12:18:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:52.801 12:18:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.801 12:18:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.801 12:18:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.801 12:18:46 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:52.801 12:18:46 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:52.801 12:18:46 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:52.801 12:18:46 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:52.801 12:18:46 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:52.801 12:18:46 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:52.801 12:18:46 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.801 12:18:46 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.801 12:18:46 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:52.801 12:18:46 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:52.801 12:18:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:52.801 12:18:46 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:52.801 12:18:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:52.801 
12:18:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.801 12:18:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:52.801 12:18:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:52.801 12:18:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:52.801 12:18:46 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:52.801 12:18:46 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:52.801 12:18:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:52.801 Cannot find device "nvmf_tgt_br" 00:21:52.801 12:18:46 -- nvmf/common.sh@155 -- # true 00:21:52.801 12:18:46 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:52.801 Cannot find device "nvmf_tgt_br2" 00:21:52.801 12:18:46 -- nvmf/common.sh@156 -- # true 00:21:52.801 12:18:46 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:52.801 12:18:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:52.801 Cannot find device "nvmf_tgt_br" 00:21:52.801 12:18:46 -- nvmf/common.sh@158 -- # true 00:21:52.801 12:18:46 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:52.801 Cannot find device "nvmf_tgt_br2" 00:21:52.801 12:18:46 -- nvmf/common.sh@159 -- # true 00:21:52.801 12:18:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:52.801 12:18:46 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:52.801 12:18:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:52.801 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:52.801 12:18:46 -- nvmf/common.sh@162 -- # true 00:21:52.801 12:18:46 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:52.801 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:52.801 12:18:46 -- nvmf/common.sh@163 -- # true 00:21:52.801 12:18:46 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:52.801 12:18:46 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:52.801 12:18:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:52.802 12:18:46 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:52.802 12:18:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:52.802 12:18:46 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:52.802 12:18:46 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:52.802 12:18:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:52.802 12:18:46 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:52.802 12:18:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:52.802 12:18:46 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:52.802 12:18:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:52.802 12:18:46 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:52.802 12:18:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:53.058 12:18:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:53.058 12:18:46 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:53.058 12:18:46 -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:53.058 12:18:46 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:53.058 12:18:46 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:53.058 12:18:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:53.058 12:18:46 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:53.058 12:18:46 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:53.058 12:18:46 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:53.058 12:18:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:53.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:53.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:21:53.058 00:21:53.058 --- 10.0.0.2 ping statistics --- 00:21:53.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.058 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:21:53.058 12:18:46 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:53.058 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:53.058 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:21:53.058 00:21:53.058 --- 10.0.0.3 ping statistics --- 00:21:53.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.058 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:21:53.058 12:18:46 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:53.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:53.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:21:53.058 00:21:53.058 --- 10.0.0.1 ping statistics --- 00:21:53.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.058 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:21:53.058 12:18:46 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.058 12:18:46 -- nvmf/common.sh@422 -- # return 0 00:21:53.058 12:18:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:53.058 12:18:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.058 12:18:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:53.058 12:18:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:53.058 12:18:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.058 12:18:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:53.058 12:18:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:53.058 12:18:46 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:53.058 12:18:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:53.058 12:18:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:53.058 12:18:46 -- common/autotest_common.sh@10 -- # set +x 00:21:53.058 12:18:46 -- nvmf/common.sh@470 -- # nvmfpid=69786 00:21:53.059 12:18:46 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:53.059 12:18:46 -- nvmf/common.sh@471 -- # waitforlisten 69786 00:21:53.059 12:18:46 -- common/autotest_common.sh@817 -- # '[' -z 69786 ']' 00:21:53.059 12:18:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.059 12:18:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:53.059 12:18:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:53.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.059 12:18:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:53.059 12:18:46 -- common/autotest_common.sh@10 -- # set +x 00:21:53.059 [2024-04-26 12:18:46.430141] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:21:53.059 [2024-04-26 12:18:46.430253] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.316 [2024-04-26 12:18:46.572626] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.316 [2024-04-26 12:18:46.707383] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.316 [2024-04-26 12:18:46.707443] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.316 [2024-04-26 12:18:46.707458] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.316 [2024-04-26 12:18:46.707469] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.316 [2024-04-26 12:18:46.707478] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:53.316 [2024-04-26 12:18:46.707521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.250 12:18:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:54.250 12:18:47 -- common/autotest_common.sh@850 -- # return 0 00:21:54.250 12:18:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:54.250 12:18:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:54.250 12:18:47 -- common/autotest_common.sh@10 -- # set +x 00:21:54.250 12:18:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.250 12:18:47 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:54.250 12:18:47 -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:54.507 true 00:21:54.508 12:18:47 -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:54.508 12:18:47 -- target/tls.sh@73 -- # jq -r .tls_version 00:21:54.765 12:18:48 -- target/tls.sh@73 -- # version=0 00:21:54.765 12:18:48 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:54.765 12:18:48 -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:55.023 12:18:48 -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:55.023 12:18:48 -- target/tls.sh@81 -- # jq -r .tls_version 00:21:55.280 12:18:48 -- target/tls.sh@81 -- # version=13 00:21:55.280 12:18:48 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:55.280 12:18:48 -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:55.538 12:18:48 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:55.538 12:18:48 -- target/tls.sh@89 -- # jq -r .tls_version 00:21:55.796 12:18:49 -- target/tls.sh@89 -- # version=7 00:21:55.796 12:18:49 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:55.796 12:18:49 -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:55.796 12:18:49 -- 
target/tls.sh@96 -- # jq -r .enable_ktls 00:21:56.055 12:18:49 -- target/tls.sh@96 -- # ktls=false 00:21:56.055 12:18:49 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:56.055 12:18:49 -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:56.313 12:18:49 -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:56.313 12:18:49 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:56.571 12:18:49 -- target/tls.sh@104 -- # ktls=true 00:21:56.571 12:18:49 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:56.571 12:18:49 -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:56.833 12:18:50 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:56.833 12:18:50 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:57.110 12:18:50 -- target/tls.sh@112 -- # ktls=false 00:21:57.110 12:18:50 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:57.110 12:18:50 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:57.110 12:18:50 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:57.110 12:18:50 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:57.110 12:18:50 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:21:57.110 12:18:50 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:21:57.110 12:18:50 -- nvmf/common.sh@693 -- # digest=1 00:21:57.110 12:18:50 -- nvmf/common.sh@694 -- # python - 00:21:57.110 12:18:50 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:57.110 12:18:50 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:57.110 12:18:50 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:57.110 12:18:50 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:57.110 12:18:50 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:21:57.110 12:18:50 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:21:57.110 12:18:50 -- nvmf/common.sh@693 -- # digest=1 00:21:57.110 12:18:50 -- nvmf/common.sh@694 -- # python - 00:21:57.110 12:18:50 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:57.110 12:18:50 -- target/tls.sh@121 -- # mktemp 00:21:57.110 12:18:50 -- target/tls.sh@121 -- # key_path=/tmp/tmp.hBQOkHOwqY 00:21:57.110 12:18:50 -- target/tls.sh@122 -- # mktemp 00:21:57.110 12:18:50 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.hwc2xxtW4F 00:21:57.110 12:18:50 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:57.110 12:18:50 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:57.110 12:18:50 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.hBQOkHOwqY 00:21:57.110 12:18:50 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.hwc2xxtW4F 00:21:57.110 12:18:50 -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:57.381 12:18:50 -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:57.950 12:18:51 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.hBQOkHOwqY 00:21:57.950 12:18:51 -- target/tls.sh@49 -- # local key=/tmp/tmp.hBQOkHOwqY 00:21:57.950 12:18:51 -- 
target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:58.208 [2024-04-26 12:18:51.455365] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.208 12:18:51 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:58.465 12:18:51 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:58.722 [2024-04-26 12:18:52.115535] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:58.723 [2024-04-26 12:18:52.115846] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.723 12:18:52 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:58.984 malloc0 00:21:59.251 12:18:52 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:59.518 12:18:52 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hBQOkHOwqY 00:21:59.518 [2024-04-26 12:18:52.960038] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:59.785 12:18:52 -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.hBQOkHOwqY 00:22:09.749 Initializing NVMe Controllers 00:22:09.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:09.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:09.749 Initialization complete. Launching workers. 
00:22:09.749 ======================================================== 00:22:09.749 Latency(us) 00:22:09.749 Device Information : IOPS MiB/s Average min max 00:22:09.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7994.30 31.23 8008.24 1515.05 16706.53 00:22:09.749 ======================================================== 00:22:09.749 Total : 7994.30 31.23 8008.24 1515.05 16706.53 00:22:09.749 00:22:09.749 12:19:03 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hBQOkHOwqY 00:22:09.749 12:19:03 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:09.749 12:19:03 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:09.749 12:19:03 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:09.749 12:19:03 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hBQOkHOwqY' 00:22:09.749 12:19:03 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:09.749 12:19:03 -- target/tls.sh@28 -- # bdevperf_pid=70024 00:22:09.749 12:19:03 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:09.749 12:19:03 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:09.749 12:19:03 -- target/tls.sh@31 -- # waitforlisten 70024 /var/tmp/bdevperf.sock 00:22:09.749 12:19:03 -- common/autotest_common.sh@817 -- # '[' -z 70024 ']' 00:22:09.749 12:19:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:09.749 12:19:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:09.749 12:19:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:09.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:09.749 12:19:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:09.749 12:19:03 -- common/autotest_common.sh@10 -- # set +x 00:22:10.006 [2024-04-26 12:19:03.237043] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:22:10.006 [2024-04-26 12:19:03.237445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70024 ] 00:22:10.006 [2024-04-26 12:19:03.379283] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.262 [2024-04-26 12:19:03.502206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.825 12:19:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:10.825 12:19:04 -- common/autotest_common.sh@850 -- # return 0 00:22:10.825 12:19:04 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hBQOkHOwqY 00:22:11.084 [2024-04-26 12:19:04.430025] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:11.084 [2024-04-26 12:19:04.430142] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:11.084 TLSTESTn1 00:22:11.085 12:19:04 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:11.343 Running I/O for 10 seconds... 00:22:21.310 00:22:21.310 Latency(us) 00:22:21.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.310 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:21.310 Verification LBA range: start 0x0 length 0x2000 00:22:21.310 TLSTESTn1 : 10.03 3375.98 13.19 0.00 0.00 37826.42 8043.05 31933.91 00:22:21.310 =================================================================================================================== 00:22:21.310 Total : 3375.98 13.19 0.00 0.00 37826.42 8043.05 31933.91 00:22:21.310 0 00:22:21.310 12:19:14 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:21.310 12:19:14 -- target/tls.sh@45 -- # killprocess 70024 00:22:21.310 12:19:14 -- common/autotest_common.sh@936 -- # '[' -z 70024 ']' 00:22:21.310 12:19:14 -- common/autotest_common.sh@940 -- # kill -0 70024 00:22:21.310 12:19:14 -- common/autotest_common.sh@941 -- # uname 00:22:21.310 12:19:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:21.310 12:19:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70024 00:22:21.310 killing process with pid 70024 00:22:21.310 Received shutdown signal, test time was about 10.000000 seconds 00:22:21.310 00:22:21.310 Latency(us) 00:22:21.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.310 =================================================================================================================== 00:22:21.310 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.310 12:19:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:21.310 12:19:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:21.310 12:19:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70024' 00:22:21.310 12:19:14 -- common/autotest_common.sh@955 -- # kill 70024 00:22:21.310 [2024-04-26 12:19:14.686703] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:21.310 
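For reference, the positive-path run above reduces to a short RPC sequence on the target side. A condensed sketch, reusing the paths, NQNs and PSK file that appear verbatim in the trace (the key file is the interchange-format PSK generated earlier and is kept at mode 0600):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # select the ssl sock implementation and pin it to TLS 1.3 before framework init completes
    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init
    # TCP transport, one subsystem, a TLS-enabled listener (-k) and a malloc namespace
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # register host1's PSK; the path-based form is flagged as deprecated in the log above
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hBQOkHOwqY

The initiator side then drives I/O twice: once with spdk_nvme_perf using -S ssl and --psk-path, and once with bdevperf attaching via bdev_nvme_attach_controller --psk, both shown in the trace above.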
12:19:14 -- common/autotest_common.sh@960 -- # wait 70024 00:22:21.569 12:19:14 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hwc2xxtW4F 00:22:21.569 12:19:14 -- common/autotest_common.sh@638 -- # local es=0 00:22:21.569 12:19:14 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hwc2xxtW4F 00:22:21.569 12:19:14 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:22:21.569 12:19:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:21.569 12:19:14 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:22:21.569 12:19:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:21.569 12:19:14 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hwc2xxtW4F 00:22:21.569 12:19:14 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:21.569 12:19:14 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:21.569 12:19:14 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:21.569 12:19:14 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hwc2xxtW4F' 00:22:21.569 12:19:14 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:21.569 12:19:14 -- target/tls.sh@28 -- # bdevperf_pid=70158 00:22:21.569 12:19:14 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:21.569 12:19:14 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:21.569 12:19:14 -- target/tls.sh@31 -- # waitforlisten 70158 /var/tmp/bdevperf.sock 00:22:21.569 12:19:14 -- common/autotest_common.sh@817 -- # '[' -z 70158 ']' 00:22:21.569 12:19:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:21.569 12:19:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:21.569 12:19:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:21.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:21.569 12:19:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:21.569 12:19:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.569 [2024-04-26 12:19:14.997843] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:22:21.569 [2024-04-26 12:19:14.998257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70158 ] 00:22:21.828 [2024-04-26 12:19:15.136163] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.828 [2024-04-26 12:19:15.246110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.762 12:19:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:22.762 12:19:15 -- common/autotest_common.sh@850 -- # return 0 00:22:22.762 12:19:15 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hwc2xxtW4F 00:22:22.762 [2024-04-26 12:19:16.201136] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:22.762 [2024-04-26 12:19:16.201725] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:22.762 [2024-04-26 12:19:16.210010] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:22.762 [2024-04-26 12:19:16.210665] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237cae0 (107): Transport endpoint is not connected 00:22:22.762 [2024-04-26 12:19:16.211652] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237cae0 (9): Bad file descriptor 00:22:22.762 [2024-04-26 12:19:16.212648] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:22.762 [2024-04-26 12:19:16.212676] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:22.762 [2024-04-26 12:19:16.212707] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:22.762 request: 00:22:22.762 { 00:22:22.762 "name": "TLSTEST", 00:22:22.762 "trtype": "tcp", 00:22:22.762 "traddr": "10.0.0.2", 00:22:22.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:22.762 "adrfam": "ipv4", 00:22:22.762 "trsvcid": "4420", 00:22:22.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.762 "psk": "/tmp/tmp.hwc2xxtW4F", 00:22:22.762 "method": "bdev_nvme_attach_controller", 00:22:22.762 "req_id": 1 00:22:22.762 } 00:22:22.762 Got JSON-RPC error response 00:22:22.762 response: 00:22:22.762 { 00:22:22.762 "code": -32602, 00:22:22.762 "message": "Invalid parameters" 00:22:22.762 } 00:22:23.021 12:19:16 -- target/tls.sh@36 -- # killprocess 70158 00:22:23.022 12:19:16 -- common/autotest_common.sh@936 -- # '[' -z 70158 ']' 00:22:23.022 12:19:16 -- common/autotest_common.sh@940 -- # kill -0 70158 00:22:23.022 12:19:16 -- common/autotest_common.sh@941 -- # uname 00:22:23.022 12:19:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:23.022 12:19:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70158 00:22:23.022 killing process with pid 70158 00:22:23.022 Received shutdown signal, test time was about 10.000000 seconds 00:22:23.022 00:22:23.022 Latency(us) 00:22:23.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.022 =================================================================================================================== 00:22:23.022 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:23.022 12:19:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:23.022 12:19:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:23.022 12:19:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70158' 00:22:23.022 12:19:16 -- common/autotest_common.sh@955 -- # kill 70158 00:22:23.022 [2024-04-26 12:19:16.262698] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:23.022 12:19:16 -- common/autotest_common.sh@960 -- # wait 70158 00:22:23.281 12:19:16 -- target/tls.sh@37 -- # return 1 00:22:23.281 12:19:16 -- common/autotest_common.sh@641 -- # es=1 00:22:23.281 12:19:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:23.281 12:19:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:23.281 12:19:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:23.281 12:19:16 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hBQOkHOwqY 00:22:23.281 12:19:16 -- common/autotest_common.sh@638 -- # local es=0 00:22:23.281 12:19:16 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hBQOkHOwqY 00:22:23.281 12:19:16 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:22:23.281 12:19:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:23.281 12:19:16 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:22:23.281 12:19:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:23.281 12:19:16 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hBQOkHOwqY 00:22:23.281 12:19:16 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:23.281 12:19:16 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:23.281 12:19:16 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:23.281 
12:19:16 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hBQOkHOwqY' 00:22:23.281 12:19:16 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:23.281 12:19:16 -- target/tls.sh@28 -- # bdevperf_pid=70185 00:22:23.281 12:19:16 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:23.281 12:19:16 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:23.281 12:19:16 -- target/tls.sh@31 -- # waitforlisten 70185 /var/tmp/bdevperf.sock 00:22:23.281 12:19:16 -- common/autotest_common.sh@817 -- # '[' -z 70185 ']' 00:22:23.281 12:19:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.281 12:19:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:23.281 12:19:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.281 12:19:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:23.281 12:19:16 -- common/autotest_common.sh@10 -- # set +x 00:22:23.281 [2024-04-26 12:19:16.576007] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:22:23.281 [2024-04-26 12:19:16.576445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70185 ] 00:22:23.281 [2024-04-26 12:19:16.713759] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.540 [2024-04-26 12:19:16.833373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.475 12:19:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:24.475 12:19:17 -- common/autotest_common.sh@850 -- # return 0 00:22:24.475 12:19:17 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.hBQOkHOwqY 00:22:24.475 [2024-04-26 12:19:17.846533] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:24.475 [2024-04-26 12:19:17.847279] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:24.475 [2024-04-26 12:19:17.852289] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:24.475 [2024-04-26 12:19:17.852337] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:24.475 [2024-04-26 12:19:17.852400] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:24.475 [2024-04-26 12:19:17.852984] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf44ae0 (107): Transport endpoint is not connected 00:22:24.475 [2024-04-26 12:19:17.853975] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf44ae0 (9): Bad file descriptor 00:22:24.475 [2024-04-26 
12:19:17.854970] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:24.475 [2024-04-26 12:19:17.855002] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:24.475 [2024-04-26 12:19:17.855019] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:24.475 request: 00:22:24.475 { 00:22:24.475 "name": "TLSTEST", 00:22:24.475 "trtype": "tcp", 00:22:24.475 "traddr": "10.0.0.2", 00:22:24.475 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:24.475 "adrfam": "ipv4", 00:22:24.475 "trsvcid": "4420", 00:22:24.475 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.475 "psk": "/tmp/tmp.hBQOkHOwqY", 00:22:24.475 "method": "bdev_nvme_attach_controller", 00:22:24.475 "req_id": 1 00:22:24.475 } 00:22:24.475 Got JSON-RPC error response 00:22:24.475 response: 00:22:24.475 { 00:22:24.475 "code": -32602, 00:22:24.475 "message": "Invalid parameters" 00:22:24.475 } 00:22:24.475 12:19:17 -- target/tls.sh@36 -- # killprocess 70185 00:22:24.475 12:19:17 -- common/autotest_common.sh@936 -- # '[' -z 70185 ']' 00:22:24.475 12:19:17 -- common/autotest_common.sh@940 -- # kill -0 70185 00:22:24.475 12:19:17 -- common/autotest_common.sh@941 -- # uname 00:22:24.475 12:19:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:24.475 12:19:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70185 00:22:24.475 12:19:17 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:24.475 12:19:17 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:24.475 12:19:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70185' 00:22:24.475 killing process with pid 70185 00:22:24.475 12:19:17 -- common/autotest_common.sh@955 -- # kill 70185 00:22:24.475 Received shutdown signal, test time was about 10.000000 seconds 00:22:24.475 00:22:24.475 Latency(us) 00:22:24.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.475 =================================================================================================================== 00:22:24.475 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:24.475 12:19:17 -- common/autotest_common.sh@960 -- # wait 70185 00:22:24.475 [2024-04-26 12:19:17.903953] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:24.733 12:19:18 -- target/tls.sh@37 -- # return 1 00:22:24.733 12:19:18 -- common/autotest_common.sh@641 -- # es=1 00:22:24.733 12:19:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:24.733 12:19:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:24.733 12:19:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:24.733 12:19:18 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hBQOkHOwqY 00:22:24.733 12:19:18 -- common/autotest_common.sh@638 -- # local es=0 00:22:24.733 12:19:18 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hBQOkHOwqY 00:22:24.733 12:19:18 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:22:24.733 12:19:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:24.733 12:19:18 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:22:24.733 12:19:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:24.733 
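The runs in this stretch are deliberate failures: host1 attaching with a key file the target never registered (/tmp/tmp.hwc2xxtW4F), host2 attaching with host1's key, and, just below, host1 attaching to a second subsystem. In each case the target cannot find a PSK for the TLS identity, the connection is torn down, and bdev_nvme_attach_controller returns the JSON-RPC error seen above. The harness wraps these calls in NOT, which inverts the exit status so that an expected failure counts as a pass. A minimal sketch of such a wrapper, for illustration only (not the literal autotest_common.sh implementation):

    # succeed only if the wrapped command fails
    NOT() {
        if "$@"; then
            return 1    # unexpected success
        fi
        return 0        # failed as expected, the test passes
    }
    # e.g.: NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hBQOkHOwqY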
12:19:18 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hBQOkHOwqY 00:22:24.733 12:19:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:24.733 12:19:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:24.733 12:19:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:24.733 12:19:18 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hBQOkHOwqY' 00:22:24.733 12:19:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:24.733 12:19:18 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:24.733 12:19:18 -- target/tls.sh@28 -- # bdevperf_pid=70213 00:22:24.733 12:19:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:24.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:24.733 12:19:18 -- target/tls.sh@31 -- # waitforlisten 70213 /var/tmp/bdevperf.sock 00:22:24.733 12:19:18 -- common/autotest_common.sh@817 -- # '[' -z 70213 ']' 00:22:24.733 12:19:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:24.733 12:19:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:24.733 12:19:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:24.733 12:19:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:24.733 12:19:18 -- common/autotest_common.sh@10 -- # set +x 00:22:24.991 [2024-04-26 12:19:18.206807] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:22:24.991 [2024-04-26 12:19:18.206929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70213 ] 00:22:24.991 [2024-04-26 12:19:18.344435] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.991 [2024-04-26 12:19:18.440130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.926 12:19:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:25.926 12:19:19 -- common/autotest_common.sh@850 -- # return 0 00:22:25.926 12:19:19 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hBQOkHOwqY 00:22:25.926 [2024-04-26 12:19:19.370263] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:25.926 [2024-04-26 12:19:19.370380] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:25.926 [2024-04-26 12:19:19.380385] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:25.926 [2024-04-26 12:19:19.380439] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:25.926 [2024-04-26 12:19:19.380501] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:25.926 [2024-04-26 12:19:19.381296] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb17ae0 (107): Transport endpoint is not connected 00:22:25.926 [2024-04-26 12:19:19.382278] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb17ae0 (9): Bad file descriptor 00:22:25.926 [2024-04-26 12:19:19.383287] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:25.926 [2024-04-26 12:19:19.383321] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:25.927 [2024-04-26 12:19:19.383338] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:25.927 request: 00:22:25.927 { 00:22:25.927 "name": "TLSTEST", 00:22:25.927 "trtype": "tcp", 00:22:25.927 "traddr": "10.0.0.2", 00:22:25.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:25.927 "adrfam": "ipv4", 00:22:25.927 "trsvcid": "4420", 00:22:25.927 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:25.927 "psk": "/tmp/tmp.hBQOkHOwqY", 00:22:25.927 "method": "bdev_nvme_attach_controller", 00:22:25.927 "req_id": 1 00:22:25.927 } 00:22:25.927 Got JSON-RPC error response 00:22:25.927 response: 00:22:25.927 { 00:22:25.927 "code": -32602, 00:22:25.927 "message": "Invalid parameters" 00:22:25.927 } 00:22:26.185 12:19:19 -- target/tls.sh@36 -- # killprocess 70213 00:22:26.185 12:19:19 -- common/autotest_common.sh@936 -- # '[' -z 70213 ']' 00:22:26.185 12:19:19 -- common/autotest_common.sh@940 -- # kill -0 70213 00:22:26.185 12:19:19 -- common/autotest_common.sh@941 -- # uname 00:22:26.185 12:19:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:26.185 12:19:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70213 00:22:26.185 killing process with pid 70213 00:22:26.185 Received shutdown signal, test time was about 10.000000 seconds 00:22:26.185 00:22:26.185 Latency(us) 00:22:26.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.185 =================================================================================================================== 00:22:26.185 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:26.185 12:19:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:26.185 12:19:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:26.185 12:19:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70213' 00:22:26.185 12:19:19 -- common/autotest_common.sh@955 -- # kill 70213 00:22:26.185 [2024-04-26 12:19:19.431572] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:26.185 12:19:19 -- common/autotest_common.sh@960 -- # wait 70213 00:22:26.445 12:19:19 -- target/tls.sh@37 -- # return 1 00:22:26.445 12:19:19 -- common/autotest_common.sh@641 -- # es=1 00:22:26.445 12:19:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:26.445 12:19:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:26.445 12:19:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:26.445 12:19:19 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:26.445 12:19:19 -- common/autotest_common.sh@638 -- # local es=0 00:22:26.445 12:19:19 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:26.445 12:19:19 
-- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:22:26.445 12:19:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:26.445 12:19:19 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:22:26.445 12:19:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:26.445 12:19:19 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:26.445 12:19:19 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:26.445 12:19:19 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:26.445 12:19:19 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:26.445 12:19:19 -- target/tls.sh@23 -- # psk= 00:22:26.445 12:19:19 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:26.445 12:19:19 -- target/tls.sh@28 -- # bdevperf_pid=70242 00:22:26.445 12:19:19 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:26.445 12:19:19 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:26.445 12:19:19 -- target/tls.sh@31 -- # waitforlisten 70242 /var/tmp/bdevperf.sock 00:22:26.445 12:19:19 -- common/autotest_common.sh@817 -- # '[' -z 70242 ']' 00:22:26.445 12:19:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.445 12:19:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:26.446 12:19:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:26.446 12:19:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:26.446 12:19:19 -- common/autotest_common.sh@10 -- # set +x 00:22:26.446 [2024-04-26 12:19:19.732472] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:22:26.446 [2024-04-26 12:19:19.732787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70242 ] 00:22:26.446 [2024-04-26 12:19:19.871313] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.707 [2024-04-26 12:19:20.001182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.276 12:19:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:27.276 12:19:20 -- common/autotest_common.sh@850 -- # return 0 00:22:27.276 12:19:20 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:27.535 [2024-04-26 12:19:20.870618] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:27.535 [2024-04-26 12:19:20.873003] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a43e20 (9): Bad file descriptor 00:22:27.535 [2024-04-26 12:19:20.873999] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:27.535 [2024-04-26 12:19:20.874030] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:27.535 [2024-04-26 12:19:20.874049] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:27.535 request: 00:22:27.535 { 00:22:27.535 "name": "TLSTEST", 00:22:27.535 "trtype": "tcp", 00:22:27.535 "traddr": "10.0.0.2", 00:22:27.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.535 "adrfam": "ipv4", 00:22:27.535 "trsvcid": "4420", 00:22:27.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.535 "method": "bdev_nvme_attach_controller", 00:22:27.535 "req_id": 1 00:22:27.535 } 00:22:27.535 Got JSON-RPC error response 00:22:27.535 response: 00:22:27.535 { 00:22:27.535 "code": -32602, 00:22:27.535 "message": "Invalid parameters" 00:22:27.535 } 00:22:27.535 12:19:20 -- target/tls.sh@36 -- # killprocess 70242 00:22:27.535 12:19:20 -- common/autotest_common.sh@936 -- # '[' -z 70242 ']' 00:22:27.535 12:19:20 -- common/autotest_common.sh@940 -- # kill -0 70242 00:22:27.535 12:19:20 -- common/autotest_common.sh@941 -- # uname 00:22:27.535 12:19:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:27.535 12:19:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70242 00:22:27.535 killing process with pid 70242 00:22:27.535 Received shutdown signal, test time was about 10.000000 seconds 00:22:27.535 00:22:27.535 Latency(us) 00:22:27.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.535 =================================================================================================================== 00:22:27.535 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:27.535 12:19:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:27.535 12:19:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:27.535 12:19:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70242' 00:22:27.535 12:19:20 -- common/autotest_common.sh@955 -- # kill 70242 00:22:27.535 12:19:20 -- common/autotest_common.sh@960 -- # wait 70242 00:22:27.793 
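The attempt above goes one step further and passes no PSK at all. The listener was created with -k, so the TLS-only target side does not complete the plain connection and the attach fails with the same "Transport endpoint is not connected" error. The initiator call from the trace, minus only the --psk argument used in the earlier runs:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # no --psk: expected to fail against the TLS-only listener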
12:19:21 -- target/tls.sh@37 -- # return 1 00:22:27.793 12:19:21 -- common/autotest_common.sh@641 -- # es=1 00:22:27.793 12:19:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:27.793 12:19:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:27.793 12:19:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:27.793 12:19:21 -- target/tls.sh@158 -- # killprocess 69786 00:22:27.793 12:19:21 -- common/autotest_common.sh@936 -- # '[' -z 69786 ']' 00:22:27.793 12:19:21 -- common/autotest_common.sh@940 -- # kill -0 69786 00:22:27.793 12:19:21 -- common/autotest_common.sh@941 -- # uname 00:22:27.793 12:19:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:27.793 12:19:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69786 00:22:27.793 killing process with pid 69786 00:22:27.793 12:19:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:27.793 12:19:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:27.793 12:19:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69786' 00:22:27.793 12:19:21 -- common/autotest_common.sh@955 -- # kill 69786 00:22:27.793 [2024-04-26 12:19:21.195485] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:27.793 12:19:21 -- common/autotest_common.sh@960 -- # wait 69786 00:22:28.052 12:19:21 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:28.052 12:19:21 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:28.052 12:19:21 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:28.052 12:19:21 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:22:28.052 12:19:21 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:28.052 12:19:21 -- nvmf/common.sh@693 -- # digest=2 00:22:28.052 12:19:21 -- nvmf/common.sh@694 -- # python - 00:22:28.052 12:19:21 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:28.052 12:19:21 -- target/tls.sh@160 -- # mktemp 00:22:28.052 12:19:21 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.xj7XJu2Fhn 00:22:28.052 12:19:21 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:28.052 12:19:21 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.xj7XJu2Fhn 00:22:28.052 12:19:21 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:28.052 12:19:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:28.052 12:19:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:28.052 12:19:21 -- common/autotest_common.sh@10 -- # set +x 00:22:28.311 12:19:21 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:28.311 12:19:21 -- nvmf/common.sh@470 -- # nvmfpid=70278 00:22:28.311 12:19:21 -- nvmf/common.sh@471 -- # waitforlisten 70278 00:22:28.311 12:19:21 -- common/autotest_common.sh@817 -- # '[' -z 70278 ']' 00:22:28.311 12:19:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.311 12:19:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:28.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
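At this point the first target app (pid 69786) has been shut down and the test switches to a longer, 48-byte PSK: format_interchange_psk is called with digest id 2, producing a key with the NVMeTLSkey-1:02: prefix, and the key is written to a fresh temp file with mode 0600. Boiled down to plain shell with the literal values from the trace (the chmod 0666 run further below is the negative test for this permission check):

    key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
    key_long_path=$(mktemp)                 # /tmp/tmp.xj7XJu2Fhn in this run
    echo -n "$key_long" > "$key_long_path"
    chmod 0600 "$key_long_path"             # loosened to 0666 later to provoke the failure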
00:22:28.311 12:19:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.311 12:19:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:28.311 12:19:21 -- common/autotest_common.sh@10 -- # set +x 00:22:28.311 [2024-04-26 12:19:21.578348] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:22:28.311 [2024-04-26 12:19:21.578687] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.311 [2024-04-26 12:19:21.713206] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.570 [2024-04-26 12:19:21.832702] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.570 [2024-04-26 12:19:21.833017] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.570 [2024-04-26 12:19:21.833255] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.570 [2024-04-26 12:19:21.833407] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.570 [2024-04-26 12:19:21.833511] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.570 [2024-04-26 12:19:21.833599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.136 12:19:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:29.136 12:19:22 -- common/autotest_common.sh@850 -- # return 0 00:22:29.136 12:19:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:29.136 12:19:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:29.136 12:19:22 -- common/autotest_common.sh@10 -- # set +x 00:22:29.136 12:19:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.136 12:19:22 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.xj7XJu2Fhn 00:22:29.136 12:19:22 -- target/tls.sh@49 -- # local key=/tmp/tmp.xj7XJu2Fhn 00:22:29.136 12:19:22 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:29.394 [2024-04-26 12:19:22.770105] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.394 12:19:22 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:29.652 12:19:23 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:29.920 [2024-04-26 12:19:23.214241] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:29.920 [2024-04-26 12:19:23.214705] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.920 12:19:23 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:30.203 malloc0 00:22:30.203 12:19:23 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:30.462 12:19:23 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xj7XJu2Fhn 00:22:30.462 
[2024-04-26 12:19:23.921519] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:30.721 12:19:23 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xj7XJu2Fhn 00:22:30.721 12:19:23 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:30.721 12:19:23 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:30.721 12:19:23 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:30.721 12:19:23 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xj7XJu2Fhn' 00:22:30.721 12:19:23 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:30.721 12:19:23 -- target/tls.sh@28 -- # bdevperf_pid=70335 00:22:30.721 12:19:23 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:30.721 12:19:23 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:30.721 12:19:23 -- target/tls.sh@31 -- # waitforlisten 70335 /var/tmp/bdevperf.sock 00:22:30.721 12:19:23 -- common/autotest_common.sh@817 -- # '[' -z 70335 ']' 00:22:30.721 12:19:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.721 12:19:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:30.721 12:19:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:30.721 12:19:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:30.721 12:19:23 -- common/autotest_common.sh@10 -- # set +x 00:22:30.721 [2024-04-26 12:19:23.987840] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:22:30.721 [2024-04-26 12:19:23.988258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70335 ] 00:22:30.721 [2024-04-26 12:19:24.126714] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.979 [2024-04-26 12:19:24.253531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.546 12:19:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:31.546 12:19:24 -- common/autotest_common.sh@850 -- # return 0 00:22:31.546 12:19:24 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xj7XJu2Fhn 00:22:31.805 [2024-04-26 12:19:25.092955] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:31.805 [2024-04-26 12:19:25.093544] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:31.805 TLSTESTn1 00:22:31.805 12:19:25 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:32.064 Running I/O for 10 seconds... 
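The initiator side of this phase is the same three-step bdevperf flow used earlier, now carrying the long key. Condensed from the commands in the trace (bdevperf is started with -z so it idles until configured over its RPC socket; the harness backgrounds it and waits on that socket):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.xj7XJu2Fhn
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 \
        -s /var/tmp/bdevperf.sock perform_tests

The ten-second verify run below reports roughly 3.3k IOPS over the TLS connection, in line with the earlier run against the short key.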
00:22:42.041 00:22:42.041 Latency(us) 00:22:42.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.041 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:42.041 Verification LBA range: start 0x0 length 0x2000 00:22:42.041 TLSTESTn1 : 10.03 3303.67 12.90 0.00 0.00 38661.60 7626.01 29789.09 00:22:42.041 =================================================================================================================== 00:22:42.041 Total : 3303.67 12.90 0.00 0.00 38661.60 7626.01 29789.09 00:22:42.041 0 00:22:42.041 12:19:35 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:42.041 12:19:35 -- target/tls.sh@45 -- # killprocess 70335 00:22:42.041 12:19:35 -- common/autotest_common.sh@936 -- # '[' -z 70335 ']' 00:22:42.041 12:19:35 -- common/autotest_common.sh@940 -- # kill -0 70335 00:22:42.041 12:19:35 -- common/autotest_common.sh@941 -- # uname 00:22:42.041 12:19:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:42.041 12:19:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70335 00:22:42.041 killing process with pid 70335 00:22:42.041 Received shutdown signal, test time was about 10.000000 seconds 00:22:42.041 00:22:42.041 Latency(us) 00:22:42.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.041 =================================================================================================================== 00:22:42.041 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:42.041 12:19:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:42.041 12:19:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:42.041 12:19:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70335' 00:22:42.041 12:19:35 -- common/autotest_common.sh@955 -- # kill 70335 00:22:42.041 [2024-04-26 12:19:35.353178] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:42.041 12:19:35 -- common/autotest_common.sh@960 -- # wait 70335 00:22:42.299 12:19:35 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.xj7XJu2Fhn 00:22:42.299 12:19:35 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xj7XJu2Fhn 00:22:42.299 12:19:35 -- common/autotest_common.sh@638 -- # local es=0 00:22:42.299 12:19:35 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xj7XJu2Fhn 00:22:42.299 12:19:35 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:22:42.299 12:19:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:42.299 12:19:35 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:22:42.299 12:19:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:42.299 12:19:35 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xj7XJu2Fhn 00:22:42.299 12:19:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:42.299 12:19:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:42.299 12:19:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:42.299 12:19:35 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xj7XJu2Fhn' 00:22:42.299 12:19:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:42.299 12:19:35 -- target/tls.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:42.299 12:19:35 -- target/tls.sh@28 -- # bdevperf_pid=70470 00:22:42.299 12:19:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:42.299 12:19:35 -- target/tls.sh@31 -- # waitforlisten 70470 /var/tmp/bdevperf.sock 00:22:42.299 12:19:35 -- common/autotest_common.sh@817 -- # '[' -z 70470 ']' 00:22:42.299 12:19:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:42.299 12:19:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:42.299 12:19:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:42.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:42.299 12:19:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:42.299 12:19:35 -- common/autotest_common.sh@10 -- # set +x 00:22:42.299 [2024-04-26 12:19:35.665915] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:22:42.299 [2024-04-26 12:19:35.666547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70470 ] 00:22:42.557 [2024-04-26 12:19:35.800380] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.557 [2024-04-26 12:19:35.934405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.123 12:19:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:43.123 12:19:36 -- common/autotest_common.sh@850 -- # return 0 00:22:43.123 12:19:36 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xj7XJu2Fhn 00:22:43.690 [2024-04-26 12:19:36.888118] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:43.690 [2024-04-26 12:19:36.888250] bdev_nvme.c:6071:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:43.690 [2024-04-26 12:19:36.888263] bdev_nvme.c:6180:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.xj7XJu2Fhn 00:22:43.690 request: 00:22:43.690 { 00:22:43.690 "name": "TLSTEST", 00:22:43.690 "trtype": "tcp", 00:22:43.690 "traddr": "10.0.0.2", 00:22:43.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.690 "adrfam": "ipv4", 00:22:43.690 "trsvcid": "4420", 00:22:43.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.690 "psk": "/tmp/tmp.xj7XJu2Fhn", 00:22:43.690 "method": "bdev_nvme_attach_controller", 00:22:43.690 "req_id": 1 00:22:43.690 } 00:22:43.690 Got JSON-RPC error response 00:22:43.690 response: 00:22:43.690 { 00:22:43.690 "code": -1, 00:22:43.690 "message": "Operation not permitted" 00:22:43.690 } 00:22:43.690 12:19:36 -- target/tls.sh@36 -- # killprocess 70470 00:22:43.690 12:19:36 -- common/autotest_common.sh@936 -- # '[' -z 70470 ']' 00:22:43.690 12:19:36 -- common/autotest_common.sh@940 -- # kill -0 70470 00:22:43.690 12:19:36 -- common/autotest_common.sh@941 -- # uname 00:22:43.690 12:19:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:43.690 12:19:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70470 00:22:43.690 12:19:36 -- 
common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:43.690 12:19:36 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:43.690 12:19:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70470' 00:22:43.690 killing process with pid 70470 00:22:43.690 Received shutdown signal, test time was about 10.000000 seconds 00:22:43.690 00:22:43.690 Latency(us) 00:22:43.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.690 =================================================================================================================== 00:22:43.690 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:43.690 12:19:36 -- common/autotest_common.sh@955 -- # kill 70470 00:22:43.690 12:19:36 -- common/autotest_common.sh@960 -- # wait 70470 00:22:43.949 12:19:37 -- target/tls.sh@37 -- # return 1 00:22:43.949 12:19:37 -- common/autotest_common.sh@641 -- # es=1 00:22:43.949 12:19:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:43.949 12:19:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:43.949 12:19:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:43.949 12:19:37 -- target/tls.sh@174 -- # killprocess 70278 00:22:43.949 12:19:37 -- common/autotest_common.sh@936 -- # '[' -z 70278 ']' 00:22:43.949 12:19:37 -- common/autotest_common.sh@940 -- # kill -0 70278 00:22:43.949 12:19:37 -- common/autotest_common.sh@941 -- # uname 00:22:43.949 12:19:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:43.949 12:19:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70278 00:22:43.949 killing process with pid 70278 00:22:43.949 12:19:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:43.949 12:19:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:43.949 12:19:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70278' 00:22:43.949 12:19:37 -- common/autotest_common.sh@955 -- # kill 70278 00:22:43.949 [2024-04-26 12:19:37.236384] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:43.949 12:19:37 -- common/autotest_common.sh@960 -- # wait 70278 00:22:44.208 12:19:37 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:44.208 12:19:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:44.208 12:19:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:44.208 12:19:37 -- common/autotest_common.sh@10 -- # set +x 00:22:44.208 12:19:37 -- nvmf/common.sh@470 -- # nvmfpid=70504 00:22:44.208 12:19:37 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:44.208 12:19:37 -- nvmf/common.sh@471 -- # waitforlisten 70504 00:22:44.208 12:19:37 -- common/autotest_common.sh@817 -- # '[' -z 70504 ']' 00:22:44.208 12:19:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.208 12:19:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:44.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.208 12:19:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:44.208 12:19:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:44.208 12:19:37 -- common/autotest_common.sh@10 -- # set +x 00:22:44.208 [2024-04-26 12:19:37.584416] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:22:44.208 [2024-04-26 12:19:37.584520] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.467 [2024-04-26 12:19:37.723630] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.467 [2024-04-26 12:19:37.855610] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.467 [2024-04-26 12:19:37.855706] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.467 [2024-04-26 12:19:37.855720] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.467 [2024-04-26 12:19:37.855731] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.467 [2024-04-26 12:19:37.855741] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.467 [2024-04-26 12:19:37.855782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.726 12:19:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:44.726 12:19:37 -- common/autotest_common.sh@850 -- # return 0 00:22:44.726 12:19:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:44.726 12:19:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:44.726 12:19:37 -- common/autotest_common.sh@10 -- # set +x 00:22:44.726 12:19:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.726 12:19:38 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.xj7XJu2Fhn 00:22:44.726 12:19:38 -- common/autotest_common.sh@638 -- # local es=0 00:22:44.726 12:19:38 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.xj7XJu2Fhn 00:22:44.726 12:19:38 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:22:44.726 12:19:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:44.726 12:19:38 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:22:44.726 12:19:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:44.726 12:19:38 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.xj7XJu2Fhn 00:22:44.726 12:19:38 -- target/tls.sh@49 -- # local key=/tmp/tmp.xj7XJu2Fhn 00:22:44.726 12:19:38 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:44.985 [2024-04-26 12:19:38.331428] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.985 12:19:38 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:45.255 12:19:38 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:45.530 [2024-04-26 12:19:38.831564] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:45.530 [2024-04-26 12:19:38.831815] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.530 12:19:38 -- target/tls.sh@55 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:45.787 malloc0 00:22:45.787 12:19:39 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:46.046 12:19:39 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xj7XJu2Fhn 00:22:46.304 [2024-04-26 12:19:39.655247] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:46.304 [2024-04-26 12:19:39.655352] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:46.304 [2024-04-26 12:19:39.655384] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:22:46.304 request: 00:22:46.304 { 00:22:46.304 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.304 "host": "nqn.2016-06.io.spdk:host1", 00:22:46.304 "psk": "/tmp/tmp.xj7XJu2Fhn", 00:22:46.304 "method": "nvmf_subsystem_add_host", 00:22:46.304 "req_id": 1 00:22:46.304 } 00:22:46.304 Got JSON-RPC error response 00:22:46.304 response: 00:22:46.304 { 00:22:46.304 "code": -32603, 00:22:46.304 "message": "Internal error" 00:22:46.304 } 00:22:46.304 12:19:39 -- common/autotest_common.sh@641 -- # es=1 00:22:46.304 12:19:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:46.304 12:19:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:46.304 12:19:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:46.304 12:19:39 -- target/tls.sh@180 -- # killprocess 70504 00:22:46.304 12:19:39 -- common/autotest_common.sh@936 -- # '[' -z 70504 ']' 00:22:46.304 12:19:39 -- common/autotest_common.sh@940 -- # kill -0 70504 00:22:46.304 12:19:39 -- common/autotest_common.sh@941 -- # uname 00:22:46.304 12:19:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:46.304 12:19:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70504 00:22:46.304 12:19:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:46.304 killing process with pid 70504 00:22:46.304 12:19:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:46.304 12:19:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70504' 00:22:46.304 12:19:39 -- common/autotest_common.sh@955 -- # kill 70504 00:22:46.304 12:19:39 -- common/autotest_common.sh@960 -- # wait 70504 00:22:46.563 12:19:39 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.xj7XJu2Fhn 00:22:46.563 12:19:39 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:46.563 12:19:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:46.563 12:19:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:46.563 12:19:39 -- common/autotest_common.sh@10 -- # set +x 00:22:46.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
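[editor's note] For orientation, the setup_nvmf_tgt helper exercised in this block reduces to the following rpc.py sequence against the target (all arguments as they appear in the log; sketch only):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables the TLS listener
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xj7XJu2Fhn

With the key still at its default mode the last call fails with -32603 Internal error, mirroring the initiator-side failure above; once the file is chmod 0600 the same sequence succeeds in the next pass.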
00:22:46.563 12:19:39 -- nvmf/common.sh@470 -- # nvmfpid=70563 00:22:46.563 12:19:39 -- nvmf/common.sh@471 -- # waitforlisten 70563 00:22:46.563 12:19:39 -- common/autotest_common.sh@817 -- # '[' -z 70563 ']' 00:22:46.563 12:19:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.563 12:19:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:46.563 12:19:39 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:46.563 12:19:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.563 12:19:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:46.563 12:19:39 -- common/autotest_common.sh@10 -- # set +x 00:22:46.821 [2024-04-26 12:19:40.052269] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:22:46.821 [2024-04-26 12:19:40.052376] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.821 [2024-04-26 12:19:40.190888] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.080 [2024-04-26 12:19:40.310936] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.080 [2024-04-26 12:19:40.311010] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.080 [2024-04-26 12:19:40.311024] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.080 [2024-04-26 12:19:40.311032] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.080 [2024-04-26 12:19:40.311040] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:47.080 [2024-04-26 12:19:40.311079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.646 12:19:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:47.646 12:19:41 -- common/autotest_common.sh@850 -- # return 0 00:22:47.646 12:19:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:47.646 12:19:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:47.646 12:19:41 -- common/autotest_common.sh@10 -- # set +x 00:22:47.646 12:19:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.646 12:19:41 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.xj7XJu2Fhn 00:22:47.646 12:19:41 -- target/tls.sh@49 -- # local key=/tmp/tmp.xj7XJu2Fhn 00:22:47.646 12:19:41 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:47.904 [2024-04-26 12:19:41.360726] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.163 12:19:41 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:48.421 12:19:41 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:48.680 [2024-04-26 12:19:41.944857] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:48.680 [2024-04-26 12:19:41.945518] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.680 12:19:41 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:48.938 malloc0 00:22:48.938 12:19:42 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:49.196 12:19:42 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xj7XJu2Fhn 00:22:49.455 [2024-04-26 12:19:42.805323] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:49.455 12:19:42 -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:49.455 12:19:42 -- target/tls.sh@188 -- # bdevperf_pid=70618 00:22:49.455 12:19:42 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.455 12:19:42 -- target/tls.sh@191 -- # waitforlisten 70618 /var/tmp/bdevperf.sock 00:22:49.455 12:19:42 -- common/autotest_common.sh@817 -- # '[' -z 70618 ']' 00:22:49.455 12:19:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.455 12:19:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:49.455 12:19:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.455 12:19:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:49.455 12:19:42 -- common/autotest_common.sh@10 -- # set +x 00:22:49.455 [2024-04-26 12:19:42.880893] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
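[editor's note] On the initiator side the pattern repeated throughout this file is: start bdevperf idle (-z) with a private RPC socket, attach the TLS controller over that socket, then drive I/O with the bdevperf.py wrapper. Roughly, with the arguments used in this run (sketch; the suite also waits for the socket with waitforlisten before issuing RPCs):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xj7XJu2Fhn
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests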
00:22:49.455 [2024-04-26 12:19:42.881829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70618 ] 00:22:49.713 [2024-04-26 12:19:43.021935] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.713 [2024-04-26 12:19:43.156916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.652 12:19:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:50.652 12:19:43 -- common/autotest_common.sh@850 -- # return 0 00:22:50.652 12:19:43 -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xj7XJu2Fhn 00:22:50.652 [2024-04-26 12:19:44.061233] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:50.652 [2024-04-26 12:19:44.061404] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:50.911 TLSTESTn1 00:22:50.911 12:19:44 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:51.170 12:19:44 -- target/tls.sh@196 -- # tgtconf='{ 00:22:51.170 "subsystems": [ 00:22:51.170 { 00:22:51.170 "subsystem": "keyring", 00:22:51.170 "config": [] 00:22:51.170 }, 00:22:51.170 { 00:22:51.170 "subsystem": "iobuf", 00:22:51.170 "config": [ 00:22:51.170 { 00:22:51.170 "method": "iobuf_set_options", 00:22:51.170 "params": { 00:22:51.170 "small_pool_count": 8192, 00:22:51.170 "large_pool_count": 1024, 00:22:51.170 "small_bufsize": 8192, 00:22:51.170 "large_bufsize": 135168 00:22:51.170 } 00:22:51.170 } 00:22:51.170 ] 00:22:51.170 }, 00:22:51.170 { 00:22:51.170 "subsystem": "sock", 00:22:51.170 "config": [ 00:22:51.170 { 00:22:51.170 "method": "sock_impl_set_options", 00:22:51.170 "params": { 00:22:51.170 "impl_name": "uring", 00:22:51.170 "recv_buf_size": 2097152, 00:22:51.170 "send_buf_size": 2097152, 00:22:51.170 "enable_recv_pipe": true, 00:22:51.170 "enable_quickack": false, 00:22:51.170 "enable_placement_id": 0, 00:22:51.170 "enable_zerocopy_send_server": false, 00:22:51.170 "enable_zerocopy_send_client": false, 00:22:51.170 "zerocopy_threshold": 0, 00:22:51.170 "tls_version": 0, 00:22:51.170 "enable_ktls": false 00:22:51.170 } 00:22:51.170 }, 00:22:51.170 { 00:22:51.170 "method": "sock_impl_set_options", 00:22:51.170 "params": { 00:22:51.170 "impl_name": "posix", 00:22:51.170 "recv_buf_size": 2097152, 00:22:51.170 "send_buf_size": 2097152, 00:22:51.170 "enable_recv_pipe": true, 00:22:51.170 "enable_quickack": false, 00:22:51.170 "enable_placement_id": 0, 00:22:51.170 "enable_zerocopy_send_server": true, 00:22:51.170 "enable_zerocopy_send_client": false, 00:22:51.170 "zerocopy_threshold": 0, 00:22:51.170 "tls_version": 0, 00:22:51.170 "enable_ktls": false 00:22:51.170 } 00:22:51.170 }, 00:22:51.170 { 00:22:51.170 "method": "sock_impl_set_options", 00:22:51.170 "params": { 00:22:51.170 "impl_name": "ssl", 00:22:51.170 "recv_buf_size": 4096, 00:22:51.170 "send_buf_size": 4096, 00:22:51.170 "enable_recv_pipe": true, 00:22:51.170 "enable_quickack": false, 00:22:51.170 "enable_placement_id": 0, 00:22:51.170 "enable_zerocopy_send_server": true, 00:22:51.170 "enable_zerocopy_send_client": false, 00:22:51.170 
"zerocopy_threshold": 0, 00:22:51.170 "tls_version": 0, 00:22:51.170 "enable_ktls": false 00:22:51.170 } 00:22:51.170 } 00:22:51.170 ] 00:22:51.170 }, 00:22:51.170 { 00:22:51.170 "subsystem": "vmd", 00:22:51.170 "config": [] 00:22:51.170 }, 00:22:51.171 { 00:22:51.171 "subsystem": "accel", 00:22:51.171 "config": [ 00:22:51.171 { 00:22:51.171 "method": "accel_set_options", 00:22:51.171 "params": { 00:22:51.171 "small_cache_size": 128, 00:22:51.171 "large_cache_size": 16, 00:22:51.171 "task_count": 2048, 00:22:51.171 "sequence_count": 2048, 00:22:51.171 "buf_count": 2048 00:22:51.171 } 00:22:51.171 } 00:22:51.171 ] 00:22:51.171 }, 00:22:51.171 { 00:22:51.171 "subsystem": "bdev", 00:22:51.171 "config": [ 00:22:51.171 { 00:22:51.171 "method": "bdev_set_options", 00:22:51.171 "params": { 00:22:51.171 "bdev_io_pool_size": 65535, 00:22:51.171 "bdev_io_cache_size": 256, 00:22:51.171 "bdev_auto_examine": true, 00:22:51.171 "iobuf_small_cache_size": 128, 00:22:51.171 "iobuf_large_cache_size": 16 00:22:51.171 } 00:22:51.171 }, 00:22:51.171 { 00:22:51.171 "method": "bdev_raid_set_options", 00:22:51.171 "params": { 00:22:51.171 "process_window_size_kb": 1024 00:22:51.171 } 00:22:51.171 }, 00:22:51.171 { 00:22:51.171 "method": "bdev_iscsi_set_options", 00:22:51.171 "params": { 00:22:51.171 "timeout_sec": 30 00:22:51.171 } 00:22:51.171 }, 00:22:51.171 { 00:22:51.171 "method": "bdev_nvme_set_options", 00:22:51.171 "params": { 00:22:51.171 "action_on_timeout": "none", 00:22:51.171 "timeout_us": 0, 00:22:51.171 "timeout_admin_us": 0, 00:22:51.171 "keep_alive_timeout_ms": 10000, 00:22:51.171 "arbitration_burst": 0, 00:22:51.171 "low_priority_weight": 0, 00:22:51.171 "medium_priority_weight": 0, 00:22:51.171 "high_priority_weight": 0, 00:22:51.171 "nvme_adminq_poll_period_us": 10000, 00:22:51.171 "nvme_ioq_poll_period_us": 0, 00:22:51.171 "io_queue_requests": 0, 00:22:51.171 "delay_cmd_submit": true, 00:22:51.171 "transport_retry_count": 4, 00:22:51.171 "bdev_retry_count": 3, 00:22:51.171 "transport_ack_timeout": 0, 00:22:51.171 "ctrlr_loss_timeout_sec": 0, 00:22:51.171 "reconnect_delay_sec": 0, 00:22:51.171 "fast_io_fail_timeout_sec": 0, 00:22:51.171 "disable_auto_failback": false, 00:22:51.171 "generate_uuids": false, 00:22:51.171 "transport_tos": 0, 00:22:51.171 "nvme_error_stat": false, 00:22:51.171 "rdma_srq_size": 0, 00:22:51.171 "io_path_stat": false, 00:22:51.171 "allow_accel_sequence": false, 00:22:51.171 "rdma_max_cq_size": 0, 00:22:51.171 "rdma_cm_event_timeout_ms": 0, 00:22:51.171 "dhchap_digests": [ 00:22:51.171 "sha256", 00:22:51.171 "sha384", 00:22:51.171 "sha512" 00:22:51.171 ], 00:22:51.171 "dhchap_dhgroups": [ 00:22:51.171 "null", 00:22:51.171 "ffdhe2048", 00:22:51.171 "ffdhe3072", 00:22:51.171 "ffdhe4096", 00:22:51.171 "ffdhe6144", 00:22:51.171 "ffdhe8192" 00:22:51.171 ] 00:22:51.171 } 00:22:51.171 }, 00:22:51.171 { 00:22:51.171 "method": "bdev_nvme_set_hotplug", 00:22:51.171 "params": { 00:22:51.171 "period_us": 100000, 00:22:51.171 "enable": false 00:22:51.171 } 00:22:51.171 }, 00:22:51.171 { 00:22:51.171 "method": "bdev_malloc_create", 00:22:51.171 "params": { 00:22:51.171 "name": "malloc0", 00:22:51.171 "num_blocks": 8192, 00:22:51.171 "block_size": 4096, 00:22:51.171 "physical_block_size": 4096, 00:22:51.171 "uuid": "1b55082a-0402-4d0f-95f5-dcf3b592f236", 00:22:51.171 "optimal_io_boundary": 0 00:22:51.171 } 00:22:51.171 }, 00:22:51.171 { 00:22:51.171 "method": "bdev_wait_for_examine" 00:22:51.171 } 00:22:51.171 ] 00:22:51.171 }, 00:22:51.171 { 00:22:51.171 "subsystem": "nbd", 
00:22:51.171 "config": [] 00:22:51.171 }, 00:22:51.171 { 00:22:51.171 "subsystem": "scheduler", 00:22:51.171 "config": [ 00:22:51.171 { 00:22:51.171 "method": "framework_set_scheduler", 00:22:51.171 "params": { 00:22:51.171 "name": "static" 00:22:51.171 } 00:22:51.171 } 00:22:51.171 ] 00:22:51.171 }, 00:22:51.171 { 00:22:51.171 "subsystem": "nvmf", 00:22:51.171 "config": [ 00:22:51.171 { 00:22:51.171 "method": "nvmf_set_config", 00:22:51.171 "params": { 00:22:51.171 "discovery_filter": "match_any", 00:22:51.171 "admin_cmd_passthru": { 00:22:51.171 "identify_ctrlr": false 00:22:51.171 } 00:22:51.171 } 00:22:51.171 }, 00:22:51.171 { 00:22:51.171 "method": "nvmf_set_max_subsystems", 00:22:51.171 "params": { 00:22:51.171 "max_subsystems": 1024 00:22:51.171 } 00:22:51.171 }, 00:22:51.171 { 00:22:51.171 "method": "nvmf_set_crdt", 00:22:51.171 "params": { 00:22:51.171 "crdt1": 0, 00:22:51.171 "crdt2": 0, 00:22:51.171 "crdt3": 0 00:22:51.171 } 00:22:51.171 }, 00:22:51.171 { 00:22:51.171 "method": "nvmf_create_transport", 00:22:51.171 "params": { 00:22:51.171 "trtype": "TCP", 00:22:51.171 "max_queue_depth": 128, 00:22:51.171 "max_io_qpairs_per_ctrlr": 127, 00:22:51.171 "in_capsule_data_size": 4096, 00:22:51.171 "max_io_size": 131072, 00:22:51.171 "io_unit_size": 131072, 00:22:51.171 "max_aq_depth": 128, 00:22:51.171 "num_shared_buffers": 511, 00:22:51.171 "buf_cache_size": 4294967295, 00:22:51.171 "dif_insert_or_strip": false, 00:22:51.171 "zcopy": false, 00:22:51.171 "c2h_success": false, 00:22:51.171 "sock_priority": 0, 00:22:51.171 "abort_timeout_sec": 1, 00:22:51.171 "ack_timeout": 0, 00:22:51.171 "data_wr_pool_size": 0 00:22:51.171 } 00:22:51.171 }, 00:22:51.171 { 00:22:51.171 "method": "nvmf_create_subsystem", 00:22:51.171 "params": { 00:22:51.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.171 "allow_any_host": false, 00:22:51.171 "serial_number": "SPDK00000000000001", 00:22:51.171 "model_number": "SPDK bdev Controller", 00:22:51.171 "max_namespaces": 10, 00:22:51.171 "min_cntlid": 1, 00:22:51.171 "max_cntlid": 65519, 00:22:51.171 "ana_reporting": false 00:22:51.171 } 00:22:51.171 }, 00:22:51.171 { 00:22:51.171 "method": "nvmf_subsystem_add_host", 00:22:51.171 "params": { 00:22:51.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.171 "host": "nqn.2016-06.io.spdk:host1", 00:22:51.171 "psk": "/tmp/tmp.xj7XJu2Fhn" 00:22:51.171 } 00:22:51.171 }, 00:22:51.171 { 00:22:51.171 "method": "nvmf_subsystem_add_ns", 00:22:51.171 "params": { 00:22:51.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.171 "namespace": { 00:22:51.171 "nsid": 1, 00:22:51.171 "bdev_name": "malloc0", 00:22:51.171 "nguid": "1B55082A04024D0F95F5DCF3B592F236", 00:22:51.171 "uuid": "1b55082a-0402-4d0f-95f5-dcf3b592f236", 00:22:51.171 "no_auto_visible": false 00:22:51.171 } 00:22:51.171 } 00:22:51.171 }, 00:22:51.171 { 00:22:51.171 "method": "nvmf_subsystem_add_listener", 00:22:51.171 "params": { 00:22:51.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.171 "listen_address": { 00:22:51.171 "trtype": "TCP", 00:22:51.171 "adrfam": "IPv4", 00:22:51.171 "traddr": "10.0.0.2", 00:22:51.171 "trsvcid": "4420" 00:22:51.171 }, 00:22:51.171 "secure_channel": true 00:22:51.171 } 00:22:51.171 } 00:22:51.171 ] 00:22:51.171 } 00:22:51.171 ] 00:22:51.171 }' 00:22:51.171 12:19:44 -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:51.431 12:19:44 -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:51.431 "subsystems": [ 00:22:51.431 { 00:22:51.431 "subsystem": "keyring", 00:22:51.431 
"config": [] 00:22:51.431 }, 00:22:51.431 { 00:22:51.431 "subsystem": "iobuf", 00:22:51.431 "config": [ 00:22:51.431 { 00:22:51.431 "method": "iobuf_set_options", 00:22:51.431 "params": { 00:22:51.431 "small_pool_count": 8192, 00:22:51.431 "large_pool_count": 1024, 00:22:51.431 "small_bufsize": 8192, 00:22:51.431 "large_bufsize": 135168 00:22:51.431 } 00:22:51.431 } 00:22:51.431 ] 00:22:51.431 }, 00:22:51.431 { 00:22:51.431 "subsystem": "sock", 00:22:51.431 "config": [ 00:22:51.431 { 00:22:51.431 "method": "sock_impl_set_options", 00:22:51.431 "params": { 00:22:51.431 "impl_name": "uring", 00:22:51.431 "recv_buf_size": 2097152, 00:22:51.431 "send_buf_size": 2097152, 00:22:51.431 "enable_recv_pipe": true, 00:22:51.431 "enable_quickack": false, 00:22:51.431 "enable_placement_id": 0, 00:22:51.432 "enable_zerocopy_send_server": false, 00:22:51.432 "enable_zerocopy_send_client": false, 00:22:51.432 "zerocopy_threshold": 0, 00:22:51.432 "tls_version": 0, 00:22:51.432 "enable_ktls": false 00:22:51.432 } 00:22:51.432 }, 00:22:51.432 { 00:22:51.432 "method": "sock_impl_set_options", 00:22:51.432 "params": { 00:22:51.432 "impl_name": "posix", 00:22:51.432 "recv_buf_size": 2097152, 00:22:51.432 "send_buf_size": 2097152, 00:22:51.432 "enable_recv_pipe": true, 00:22:51.432 "enable_quickack": false, 00:22:51.432 "enable_placement_id": 0, 00:22:51.432 "enable_zerocopy_send_server": true, 00:22:51.432 "enable_zerocopy_send_client": false, 00:22:51.432 "zerocopy_threshold": 0, 00:22:51.432 "tls_version": 0, 00:22:51.432 "enable_ktls": false 00:22:51.432 } 00:22:51.432 }, 00:22:51.432 { 00:22:51.432 "method": "sock_impl_set_options", 00:22:51.432 "params": { 00:22:51.432 "impl_name": "ssl", 00:22:51.432 "recv_buf_size": 4096, 00:22:51.432 "send_buf_size": 4096, 00:22:51.432 "enable_recv_pipe": true, 00:22:51.432 "enable_quickack": false, 00:22:51.432 "enable_placement_id": 0, 00:22:51.432 "enable_zerocopy_send_server": true, 00:22:51.432 "enable_zerocopy_send_client": false, 00:22:51.432 "zerocopy_threshold": 0, 00:22:51.432 "tls_version": 0, 00:22:51.432 "enable_ktls": false 00:22:51.432 } 00:22:51.432 } 00:22:51.432 ] 00:22:51.432 }, 00:22:51.432 { 00:22:51.432 "subsystem": "vmd", 00:22:51.432 "config": [] 00:22:51.432 }, 00:22:51.432 { 00:22:51.432 "subsystem": "accel", 00:22:51.432 "config": [ 00:22:51.432 { 00:22:51.432 "method": "accel_set_options", 00:22:51.432 "params": { 00:22:51.432 "small_cache_size": 128, 00:22:51.432 "large_cache_size": 16, 00:22:51.432 "task_count": 2048, 00:22:51.432 "sequence_count": 2048, 00:22:51.432 "buf_count": 2048 00:22:51.432 } 00:22:51.432 } 00:22:51.432 ] 00:22:51.432 }, 00:22:51.432 { 00:22:51.432 "subsystem": "bdev", 00:22:51.432 "config": [ 00:22:51.432 { 00:22:51.432 "method": "bdev_set_options", 00:22:51.432 "params": { 00:22:51.432 "bdev_io_pool_size": 65535, 00:22:51.432 "bdev_io_cache_size": 256, 00:22:51.432 "bdev_auto_examine": true, 00:22:51.432 "iobuf_small_cache_size": 128, 00:22:51.432 "iobuf_large_cache_size": 16 00:22:51.432 } 00:22:51.432 }, 00:22:51.432 { 00:22:51.432 "method": "bdev_raid_set_options", 00:22:51.432 "params": { 00:22:51.432 "process_window_size_kb": 1024 00:22:51.432 } 00:22:51.432 }, 00:22:51.432 { 00:22:51.432 "method": "bdev_iscsi_set_options", 00:22:51.432 "params": { 00:22:51.432 "timeout_sec": 30 00:22:51.432 } 00:22:51.432 }, 00:22:51.432 { 00:22:51.432 "method": "bdev_nvme_set_options", 00:22:51.432 "params": { 00:22:51.432 "action_on_timeout": "none", 00:22:51.432 "timeout_us": 0, 00:22:51.432 "timeout_admin_us": 0, 
00:22:51.432 "keep_alive_timeout_ms": 10000, 00:22:51.432 "arbitration_burst": 0, 00:22:51.432 "low_priority_weight": 0, 00:22:51.432 "medium_priority_weight": 0, 00:22:51.432 "high_priority_weight": 0, 00:22:51.432 "nvme_adminq_poll_period_us": 10000, 00:22:51.432 "nvme_ioq_poll_period_us": 0, 00:22:51.432 "io_queue_requests": 512, 00:22:51.432 "delay_cmd_submit": true, 00:22:51.432 "transport_retry_count": 4, 00:22:51.432 "bdev_retry_count": 3, 00:22:51.432 "transport_ack_timeout": 0, 00:22:51.432 "ctrlr_loss_timeout_sec": 0, 00:22:51.432 "reconnect_delay_sec": 0, 00:22:51.432 "fast_io_fail_timeout_sec": 0, 00:22:51.432 "disable_auto_failback": false, 00:22:51.432 "generate_uuids": false, 00:22:51.432 "transport_tos": 0, 00:22:51.432 "nvme_error_stat": false, 00:22:51.432 "rdma_srq_size": 0, 00:22:51.432 "io_path_stat": false, 00:22:51.432 "allow_accel_sequence": false, 00:22:51.432 "rdma_max_cq_size": 0, 00:22:51.432 "rdma_cm_event_timeout_ms": 0, 00:22:51.432 "dhchap_digests": [ 00:22:51.432 "sha256", 00:22:51.432 "sha384", 00:22:51.432 "sha512" 00:22:51.432 ], 00:22:51.432 "dhchap_dhgroups": [ 00:22:51.432 "null", 00:22:51.432 "ffdhe2048", 00:22:51.432 "ffdhe3072", 00:22:51.432 "ffdhe4096", 00:22:51.432 "ffdhe6144", 00:22:51.432 "ffdhe8192" 00:22:51.432 ] 00:22:51.432 } 00:22:51.432 }, 00:22:51.432 { 00:22:51.432 "method": "bdev_nvme_attach_controller", 00:22:51.432 "params": { 00:22:51.432 "name": "TLSTEST", 00:22:51.432 "trtype": "TCP", 00:22:51.432 "adrfam": "IPv4", 00:22:51.432 "traddr": "10.0.0.2", 00:22:51.432 "trsvcid": "4420", 00:22:51.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.432 "prchk_reftag": false, 00:22:51.432 "prchk_guard": false, 00:22:51.432 "ctrlr_loss_timeout_sec": 0, 00:22:51.432 "reconnect_delay_sec": 0, 00:22:51.432 "fast_io_fail_timeout_sec": 0, 00:22:51.432 "psk": "/tmp/tmp.xj7XJu2Fhn", 00:22:51.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.432 "hdgst": false, 00:22:51.432 "ddgst": false 00:22:51.432 } 00:22:51.432 }, 00:22:51.432 { 00:22:51.432 "method": "bdev_nvme_set_hotplug", 00:22:51.432 "params": { 00:22:51.432 "period_us": 100000, 00:22:51.432 "enable": false 00:22:51.432 } 00:22:51.432 }, 00:22:51.432 { 00:22:51.432 "method": "bdev_wait_for_examine" 00:22:51.432 } 00:22:51.432 ] 00:22:51.432 }, 00:22:51.432 { 00:22:51.432 "subsystem": "nbd", 00:22:51.432 "config": [] 00:22:51.432 } 00:22:51.432 ] 00:22:51.432 }' 00:22:51.432 12:19:44 -- target/tls.sh@199 -- # killprocess 70618 00:22:51.432 12:19:44 -- common/autotest_common.sh@936 -- # '[' -z 70618 ']' 00:22:51.432 12:19:44 -- common/autotest_common.sh@940 -- # kill -0 70618 00:22:51.432 12:19:44 -- common/autotest_common.sh@941 -- # uname 00:22:51.432 12:19:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:51.432 12:19:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70618 00:22:51.432 killing process with pid 70618 00:22:51.432 Received shutdown signal, test time was about 10.000000 seconds 00:22:51.432 00:22:51.432 Latency(us) 00:22:51.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.432 =================================================================================================================== 00:22:51.432 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:51.432 12:19:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:51.432 12:19:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:51.432 12:19:44 -- common/autotest_common.sh@954 -- # echo 'killing 
process with pid 70618' 00:22:51.432 12:19:44 -- common/autotest_common.sh@955 -- # kill 70618 00:22:51.432 [2024-04-26 12:19:44.807060] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:51.432 12:19:44 -- common/autotest_common.sh@960 -- # wait 70618 00:22:51.692 12:19:45 -- target/tls.sh@200 -- # killprocess 70563 00:22:51.692 12:19:45 -- common/autotest_common.sh@936 -- # '[' -z 70563 ']' 00:22:51.692 12:19:45 -- common/autotest_common.sh@940 -- # kill -0 70563 00:22:51.692 12:19:45 -- common/autotest_common.sh@941 -- # uname 00:22:51.692 12:19:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:51.692 12:19:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70563 00:22:51.692 killing process with pid 70563 00:22:51.692 12:19:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:51.692 12:19:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:51.692 12:19:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70563' 00:22:51.692 12:19:45 -- common/autotest_common.sh@955 -- # kill 70563 00:22:51.692 [2024-04-26 12:19:45.102995] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:51.692 12:19:45 -- common/autotest_common.sh@960 -- # wait 70563 00:22:51.951 12:19:45 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:51.951 12:19:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:51.951 12:19:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:51.951 12:19:45 -- common/autotest_common.sh@10 -- # set +x 00:22:51.951 12:19:45 -- target/tls.sh@203 -- # echo '{ 00:22:51.951 "subsystems": [ 00:22:51.951 { 00:22:51.951 "subsystem": "keyring", 00:22:51.951 "config": [] 00:22:51.951 }, 00:22:51.951 { 00:22:51.951 "subsystem": "iobuf", 00:22:51.951 "config": [ 00:22:51.951 { 00:22:51.951 "method": "iobuf_set_options", 00:22:51.951 "params": { 00:22:51.951 "small_pool_count": 8192, 00:22:51.951 "large_pool_count": 1024, 00:22:51.951 "small_bufsize": 8192, 00:22:51.951 "large_bufsize": 135168 00:22:51.951 } 00:22:51.951 } 00:22:51.951 ] 00:22:51.951 }, 00:22:51.951 { 00:22:51.951 "subsystem": "sock", 00:22:51.951 "config": [ 00:22:51.951 { 00:22:51.951 "method": "sock_impl_set_options", 00:22:51.951 "params": { 00:22:51.951 "impl_name": "uring", 00:22:51.951 "recv_buf_size": 2097152, 00:22:51.951 "send_buf_size": 2097152, 00:22:51.951 "enable_recv_pipe": true, 00:22:51.951 "enable_quickack": false, 00:22:51.951 "enable_placement_id": 0, 00:22:51.951 "enable_zerocopy_send_server": false, 00:22:51.951 "enable_zerocopy_send_client": false, 00:22:51.951 "zerocopy_threshold": 0, 00:22:51.951 "tls_version": 0, 00:22:51.951 "enable_ktls": false 00:22:51.951 } 00:22:51.951 }, 00:22:51.951 { 00:22:51.951 "method": "sock_impl_set_options", 00:22:51.951 "params": { 00:22:51.951 "impl_name": "posix", 00:22:51.951 "recv_buf_size": 2097152, 00:22:51.951 "send_buf_size": 2097152, 00:22:51.951 "enable_recv_pipe": true, 00:22:51.951 "enable_quickack": false, 00:22:51.951 "enable_placement_id": 0, 00:22:51.951 "enable_zerocopy_send_server": true, 00:22:51.951 "enable_zerocopy_send_client": false, 00:22:51.951 "zerocopy_threshold": 0, 00:22:51.951 "tls_version": 0, 00:22:51.951 "enable_ktls": false 00:22:51.951 } 00:22:51.951 }, 00:22:51.951 { 00:22:51.951 "method": "sock_impl_set_options", 00:22:51.951 "params": { 
00:22:51.951 "impl_name": "ssl", 00:22:51.951 "recv_buf_size": 4096, 00:22:51.951 "send_buf_size": 4096, 00:22:51.951 "enable_recv_pipe": true, 00:22:51.951 "enable_quickack": false, 00:22:51.951 "enable_placement_id": 0, 00:22:51.951 "enable_zerocopy_send_server": true, 00:22:51.951 "enable_zerocopy_send_client": false, 00:22:51.951 "zerocopy_threshold": 0, 00:22:51.951 "tls_version": 0, 00:22:51.951 "enable_ktls": false 00:22:51.951 } 00:22:51.951 } 00:22:51.951 ] 00:22:51.951 }, 00:22:51.951 { 00:22:51.951 "subsystem": "vmd", 00:22:51.951 "config": [] 00:22:51.951 }, 00:22:51.951 { 00:22:51.951 "subsystem": "accel", 00:22:51.951 "config": [ 00:22:51.951 { 00:22:51.951 "method": "accel_set_options", 00:22:51.951 "params": { 00:22:51.951 "small_cache_size": 128, 00:22:51.951 "large_cache_size": 16, 00:22:51.951 "task_count": 2048, 00:22:51.951 "sequence_count": 2048, 00:22:51.951 "buf_count": 2048 00:22:51.951 } 00:22:51.951 } 00:22:51.951 ] 00:22:51.951 }, 00:22:51.951 { 00:22:51.951 "subsystem": "bdev", 00:22:51.951 "config": [ 00:22:51.951 { 00:22:51.951 "method": "bdev_set_options", 00:22:51.951 "params": { 00:22:51.951 "bdev_io_pool_size": 65535, 00:22:51.951 "bdev_io_cache_size": 256, 00:22:51.951 "bdev_auto_examine": true, 00:22:51.951 "iobuf_small_cache_size": 128, 00:22:51.951 "iobuf_large_cache_size": 16 00:22:51.951 } 00:22:51.951 }, 00:22:51.951 { 00:22:51.951 "method": "bdev_raid_set_options", 00:22:51.951 "params": { 00:22:51.951 "process_window_size_kb": 1024 00:22:51.951 } 00:22:51.951 }, 00:22:51.951 { 00:22:51.951 "method": "bdev_iscsi_set_options", 00:22:51.951 "params": { 00:22:51.951 "timeout_sec": 30 00:22:51.951 } 00:22:51.951 }, 00:22:51.951 { 00:22:51.951 "method": "bdev_nvme_set_options", 00:22:51.951 "params": { 00:22:51.951 "action_on_timeout": "none", 00:22:51.951 "timeout_us": 0, 00:22:51.951 "timeout_admin_us": 0, 00:22:51.951 "keep_alive_timeout_ms": 10000, 00:22:51.951 "arbitration_burst": 0, 00:22:51.951 "low_priority_weight": 0, 00:22:51.951 "medium_priority_weight": 0, 00:22:51.951 "high_priority_weight": 0, 00:22:51.951 "nvme_adminq_poll_period_us": 10000, 00:22:51.951 "nvme_ioq_poll_period_us": 0, 00:22:51.951 "io_queue_requests": 0, 00:22:51.951 "delay_cmd_submit": true, 00:22:51.951 "transport_retry_count": 4, 00:22:51.951 "bdev_retry_count": 3, 00:22:51.951 "transport_ack_timeout": 0, 00:22:51.951 "ctrlr_loss_timeout_sec": 0, 00:22:51.951 "reconnect_delay_sec": 0, 00:22:51.951 "fast_io_fail_timeout_sec": 0, 00:22:51.951 "disable_auto_failback": false, 00:22:51.951 "generate_uuids": false, 00:22:51.951 "transport_tos": 0, 00:22:51.951 "nvme_error_stat": false, 00:22:51.951 "rdma_srq_size": 0, 00:22:51.951 "io_path_stat": false, 00:22:51.951 "allow_accel_sequence": false, 00:22:51.951 "rdma_max_cq_size": 0, 00:22:51.951 "rdma_cm_event_timeout_ms": 0, 00:22:51.951 "dhchap_digests": [ 00:22:51.951 "sha256", 00:22:51.951 "sha384", 00:22:51.951 "sha512" 00:22:51.951 ], 00:22:51.951 "dhchap_dhgroups": [ 00:22:51.951 "null", 00:22:51.951 "ffdhe2048", 00:22:51.951 "ffdhe3072", 00:22:51.951 "ffdhe4096", 00:22:51.951 "ffdhe6144", 00:22:51.951 "ffdhe8192" 00:22:51.951 ] 00:22:51.951 } 00:22:51.951 }, 00:22:51.951 { 00:22:51.951 "method": "bdev_nvme_set_hotplug", 00:22:51.951 "params": { 00:22:51.951 "period_us": 100000, 00:22:51.951 "enable": false 00:22:51.951 } 00:22:51.951 }, 00:22:51.951 { 00:22:51.952 "method": "bdev_malloc_create", 00:22:51.952 "params": { 00:22:51.952 "name": "malloc0", 00:22:51.952 "num_blocks": 8192, 00:22:51.952 "block_size": 4096, 
00:22:51.952 "physical_block_size": 4096, 00:22:51.952 "uuid": "1b55082a-0402-4d0f-95f5-dcf3b592f236", 00:22:51.952 "optimal_io_boundary": 0 00:22:51.952 } 00:22:51.952 }, 00:22:51.952 { 00:22:51.952 "method": "bdev_wait_for_examine" 00:22:51.952 } 00:22:51.952 ] 00:22:51.952 }, 00:22:51.952 { 00:22:51.952 "subsystem": "nbd", 00:22:51.952 "config": [] 00:22:51.952 }, 00:22:51.952 { 00:22:51.952 "subsystem": "scheduler", 00:22:51.952 "config": [ 00:22:51.952 { 00:22:51.952 "method": "framework_set_scheduler", 00:22:51.952 "params": { 00:22:51.952 "name": "static" 00:22:51.952 } 00:22:51.952 } 00:22:51.952 ] 00:22:51.952 }, 00:22:51.952 { 00:22:51.952 "subsystem": "nvmf", 00:22:51.952 "config": [ 00:22:51.952 { 00:22:51.952 "method": "nvmf_set_config", 00:22:51.952 "params": { 00:22:51.952 "discovery_filter": "match_any", 00:22:51.952 "admin_cmd_passthru": { 00:22:51.952 "identify_ctrlr": false 00:22:51.952 } 00:22:51.952 } 00:22:51.952 }, 00:22:51.952 { 00:22:51.952 "method": "nvmf_set_max_subsystems", 00:22:51.952 "params": { 00:22:51.952 "max_subsystems": 1024 00:22:51.952 } 00:22:51.952 }, 00:22:51.952 { 00:22:51.952 "method": "nvmf_set_crdt", 00:22:51.952 "params": { 00:22:51.952 "crdt1": 0, 00:22:51.952 "crdt2": 0, 00:22:51.952 "crdt3": 0 00:22:51.952 } 00:22:51.952 }, 00:22:51.952 { 00:22:51.952 "method": "nvmf_create_transport", 00:22:51.952 "params": { 00:22:51.952 "trtype": "TCP", 00:22:51.952 "max_queue_depth": 128, 00:22:51.952 "max_io_qpairs_per_ctrlr": 127, 00:22:51.952 "in_capsule_data_size": 4096, 00:22:51.952 "max_io_size": 131072, 00:22:51.952 "io_unit_size": 131072, 00:22:51.952 "max_aq_depth": 128, 00:22:51.952 "num_shared_buffers": 511, 00:22:51.952 "buf_cache_size": 4294967295, 00:22:51.952 "dif_insert_or_strip": false, 00:22:51.952 "zcopy": false, 00:22:51.952 "c2h_success": false, 00:22:51.952 "sock_priority": 0, 00:22:51.952 "abort_timeout_sec": 1, 00:22:51.952 "ack_timeout": 0, 00:22:51.952 "data_wr_pool_size": 0 00:22:51.952 } 00:22:51.952 }, 00:22:51.952 { 00:22:51.952 "method": "nvmf_create_subsystem", 00:22:51.952 "params": { 00:22:51.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.952 "allow_any_host": false, 00:22:51.952 "serial_number": "SPDK00000000000001", 00:22:51.952 "model_number": "SPDK bdev Controller", 00:22:51.952 "max_namespaces": 10, 00:22:51.952 "min_cntlid": 1, 00:22:51.952 "max_cntlid": 65519, 00:22:51.952 "ana_reporting": false 00:22:51.952 } 00:22:51.952 }, 00:22:51.952 { 00:22:51.952 "method": "nvmf_subsystem_add_host", 00:22:51.952 "params": { 00:22:51.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.952 "host": "nqn.2016-06.io.spdk:host1", 00:22:51.952 "psk": "/tmp/tmp.xj7XJu2Fhn" 00:22:51.952 } 00:22:51.952 }, 00:22:51.952 { 00:22:51.952 "method": "nvmf_subsystem_add_ns", 00:22:51.952 "params": { 00:22:51.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.952 "namespace": { 00:22:51.952 "nsid": 1, 00:22:51.952 "bdev_name": "malloc0", 00:22:51.952 "nguid": "1B55082A04024D0F95F5DCF3B592F236", 00:22:51.952 "uuid": "1b55082a-0402-4d0f-95f5-dcf3b592f236", 00:22:51.952 "no_auto_visible": false 00:22:51.952 } 00:22:51.952 } 00:22:51.952 }, 00:22:51.952 { 00:22:51.952 "method": "nvmf_subsystem_add_listener", 00:22:51.952 "params": { 00:22:51.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.952 "listen_address": { 00:22:51.952 "trtype": "TCP", 00:22:51.952 "adrfam": "IPv4", 00:22:51.952 "traddr": "10.0.0.2", 00:22:51.952 "trsvcid": "4420" 00:22:51.952 }, 00:22:51.952 "secure_channel": true 00:22:51.952 } 00:22:51.952 } 00:22:51.952 ] 
00:22:51.952 } 00:22:51.952 ] 00:22:51.952 }' 00:22:51.952 12:19:45 -- nvmf/common.sh@470 -- # nvmfpid=70671 00:22:51.952 12:19:45 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:51.952 12:19:45 -- nvmf/common.sh@471 -- # waitforlisten 70671 00:22:51.952 12:19:45 -- common/autotest_common.sh@817 -- # '[' -z 70671 ']' 00:22:51.952 12:19:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.952 12:19:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:51.952 12:19:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.952 12:19:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:51.952 12:19:45 -- common/autotest_common.sh@10 -- # set +x 00:22:52.211 [2024-04-26 12:19:45.457000] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:22:52.211 [2024-04-26 12:19:45.457122] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.211 [2024-04-26 12:19:45.599701] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.470 [2024-04-26 12:19:45.752932] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.470 [2024-04-26 12:19:45.753001] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.470 [2024-04-26 12:19:45.753016] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.470 [2024-04-26 12:19:45.753027] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.470 [2024-04-26 12:19:45.753036] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.470 [2024-04-26 12:19:45.753165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.729 [2024-04-26 12:19:45.992866] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.729 [2024-04-26 12:19:46.008778] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:52.729 [2024-04-26 12:19:46.024771] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:52.729 [2024-04-26 12:19:46.024992] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.297 12:19:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:53.297 12:19:46 -- common/autotest_common.sh@850 -- # return 0 00:22:53.297 12:19:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:53.297 12:19:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:53.297 12:19:46 -- common/autotest_common.sh@10 -- # set +x 00:22:53.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
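[editor's note] This phase replays the configurations captured earlier with save_config instead of re-issuing individual RPCs: the target receives its JSON back on /dev/fd/62 via -c, and bdevperf (below) receives its copy on /dev/fd/63. Schematically, assuming bash process substitution is what produces those /dev/fd paths (a sketch, not the literal script):

  tgtconf=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &

  bdevperfconf=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &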
00:22:53.297 12:19:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.297 12:19:46 -- target/tls.sh@207 -- # bdevperf_pid=70700 00:22:53.297 12:19:46 -- target/tls.sh@208 -- # waitforlisten 70700 /var/tmp/bdevperf.sock 00:22:53.297 12:19:46 -- common/autotest_common.sh@817 -- # '[' -z 70700 ']' 00:22:53.297 12:19:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.297 12:19:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:53.297 12:19:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.297 12:19:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:53.297 12:19:46 -- common/autotest_common.sh@10 -- # set +x 00:22:53.297 12:19:46 -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:53.297 12:19:46 -- target/tls.sh@204 -- # echo '{ 00:22:53.297 "subsystems": [ 00:22:53.297 { 00:22:53.297 "subsystem": "keyring", 00:22:53.297 "config": [] 00:22:53.297 }, 00:22:53.297 { 00:22:53.297 "subsystem": "iobuf", 00:22:53.297 "config": [ 00:22:53.297 { 00:22:53.297 "method": "iobuf_set_options", 00:22:53.297 "params": { 00:22:53.297 "small_pool_count": 8192, 00:22:53.297 "large_pool_count": 1024, 00:22:53.297 "small_bufsize": 8192, 00:22:53.297 "large_bufsize": 135168 00:22:53.297 } 00:22:53.297 } 00:22:53.297 ] 00:22:53.297 }, 00:22:53.297 { 00:22:53.297 "subsystem": "sock", 00:22:53.297 "config": [ 00:22:53.297 { 00:22:53.297 "method": "sock_impl_set_options", 00:22:53.297 "params": { 00:22:53.297 "impl_name": "uring", 00:22:53.297 "recv_buf_size": 2097152, 00:22:53.297 "send_buf_size": 2097152, 00:22:53.297 "enable_recv_pipe": true, 00:22:53.297 "enable_quickack": false, 00:22:53.297 "enable_placement_id": 0, 00:22:53.297 "enable_zerocopy_send_server": false, 00:22:53.297 "enable_zerocopy_send_client": false, 00:22:53.297 "zerocopy_threshold": 0, 00:22:53.297 "tls_version": 0, 00:22:53.297 "enable_ktls": false 00:22:53.297 } 00:22:53.297 }, 00:22:53.297 { 00:22:53.297 "method": "sock_impl_set_options", 00:22:53.297 "params": { 00:22:53.297 "impl_name": "posix", 00:22:53.297 "recv_buf_size": 2097152, 00:22:53.297 "send_buf_size": 2097152, 00:22:53.297 "enable_recv_pipe": true, 00:22:53.297 "enable_quickack": false, 00:22:53.297 "enable_placement_id": 0, 00:22:53.297 "enable_zerocopy_send_server": true, 00:22:53.297 "enable_zerocopy_send_client": false, 00:22:53.297 "zerocopy_threshold": 0, 00:22:53.297 "tls_version": 0, 00:22:53.297 "enable_ktls": false 00:22:53.297 } 00:22:53.297 }, 00:22:53.297 { 00:22:53.297 "method": "sock_impl_set_options", 00:22:53.297 "params": { 00:22:53.297 "impl_name": "ssl", 00:22:53.297 "recv_buf_size": 4096, 00:22:53.297 "send_buf_size": 4096, 00:22:53.297 "enable_recv_pipe": true, 00:22:53.297 "enable_quickack": false, 00:22:53.297 "enable_placement_id": 0, 00:22:53.297 "enable_zerocopy_send_server": true, 00:22:53.297 "enable_zerocopy_send_client": false, 00:22:53.297 "zerocopy_threshold": 0, 00:22:53.297 "tls_version": 0, 00:22:53.297 "enable_ktls": false 00:22:53.297 } 00:22:53.297 } 00:22:53.297 ] 00:22:53.297 }, 00:22:53.297 { 00:22:53.297 "subsystem": "vmd", 00:22:53.297 "config": [] 00:22:53.297 }, 00:22:53.297 { 00:22:53.297 "subsystem": "accel", 00:22:53.297 "config": [ 00:22:53.297 { 00:22:53.297 "method": "accel_set_options", 
00:22:53.297 "params": { 00:22:53.297 "small_cache_size": 128, 00:22:53.297 "large_cache_size": 16, 00:22:53.297 "task_count": 2048, 00:22:53.297 "sequence_count": 2048, 00:22:53.297 "buf_count": 2048 00:22:53.297 } 00:22:53.297 } 00:22:53.297 ] 00:22:53.297 }, 00:22:53.297 { 00:22:53.297 "subsystem": "bdev", 00:22:53.297 "config": [ 00:22:53.297 { 00:22:53.297 "method": "bdev_set_options", 00:22:53.297 "params": { 00:22:53.297 "bdev_io_pool_size": 65535, 00:22:53.297 "bdev_io_cache_size": 256, 00:22:53.297 "bdev_auto_examine": true, 00:22:53.297 "iobuf_small_cache_size": 128, 00:22:53.297 "iobuf_large_cache_size": 16 00:22:53.297 } 00:22:53.297 }, 00:22:53.297 { 00:22:53.297 "method": "bdev_raid_set_options", 00:22:53.297 "params": { 00:22:53.297 "process_window_size_kb": 1024 00:22:53.297 } 00:22:53.297 }, 00:22:53.297 { 00:22:53.297 "method": "bdev_iscsi_set_options", 00:22:53.298 "params": { 00:22:53.298 "timeout_sec": 30 00:22:53.298 } 00:22:53.298 }, 00:22:53.298 { 00:22:53.298 "method": "bdev_nvme_set_options", 00:22:53.298 "params": { 00:22:53.298 "action_on_timeout": "none", 00:22:53.298 "timeout_us": 0, 00:22:53.298 "timeout_admin_us": 0, 00:22:53.298 "keep_alive_timeout_ms": 10000, 00:22:53.298 "arbitration_burst": 0, 00:22:53.298 "low_priority_weight": 0, 00:22:53.298 "medium_priority_weight": 0, 00:22:53.298 "high_priority_weight": 0, 00:22:53.298 "nvme_adminq_poll_period_us": 10000, 00:22:53.298 "nvme_ioq_poll_period_us": 0, 00:22:53.298 "io_queue_requests": 512, 00:22:53.298 "delay_cmd_submit": true, 00:22:53.298 "transport_retry_count": 4, 00:22:53.298 "bdev_retry_count": 3, 00:22:53.298 "transport_ack_timeout": 0, 00:22:53.298 "ctrlr_loss_timeout_sec": 0, 00:22:53.298 "reconnect_delay_sec": 0, 00:22:53.298 "fast_io_fail_timeout_sec": 0, 00:22:53.298 "disable_auto_failback": false, 00:22:53.298 "generate_uuids": false, 00:22:53.298 "transport_tos": 0, 00:22:53.298 "nvme_error_stat": false, 00:22:53.298 "rdma_srq_size": 0, 00:22:53.298 "io_path_stat": false, 00:22:53.298 "allow_accel_sequence": false, 00:22:53.298 "rdma_max_cq_size": 0, 00:22:53.298 "rdma_cm_event_timeout_ms": 0, 00:22:53.298 "dhchap_digests": [ 00:22:53.298 "sha256", 00:22:53.298 "sha384", 00:22:53.298 "sha512" 00:22:53.298 ], 00:22:53.298 "dhchap_dhgroups": [ 00:22:53.298 "null", 00:22:53.298 "ffdhe2048", 00:22:53.298 "ffdhe3072", 00:22:53.298 "ffdhe4096", 00:22:53.298 "ffdhe6144", 00:22:53.298 "ffdhe8192" 00:22:53.298 ] 00:22:53.298 } 00:22:53.298 }, 00:22:53.298 { 00:22:53.298 "method": "bdev_nvme_attach_controller", 00:22:53.298 "params": { 00:22:53.298 "name": "TLSTEST", 00:22:53.298 "trtype": "TCP", 00:22:53.298 "adrfam": "IPv4", 00:22:53.298 "traddr": "10.0.0.2", 00:22:53.298 "trsvcid": "4420", 00:22:53.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.298 "prchk_reftag": false, 00:22:53.298 "prchk_guard": false, 00:22:53.298 "ctrlr_loss_timeout_sec": 0, 00:22:53.298 "reconnect_delay_sec": 0, 00:22:53.298 "fast_io_fail_timeout_sec": 0, 00:22:53.298 "psk": "/tmp/tmp.xj7XJu2Fhn", 00:22:53.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.298 "hdgst": false, 00:22:53.298 "ddgst": false 00:22:53.298 } 00:22:53.298 }, 00:22:53.298 { 00:22:53.298 "method": "bdev_nvme_set_hotplug", 00:22:53.298 "params": { 00:22:53.298 "period_us": 100000, 00:22:53.298 "enable": false 00:22:53.298 } 00:22:53.298 }, 00:22:53.298 { 00:22:53.298 "method": "bdev_wait_for_examine" 00:22:53.298 } 00:22:53.298 ] 00:22:53.298 }, 00:22:53.298 { 00:22:53.298 "subsystem": "nbd", 00:22:53.298 "config": [] 00:22:53.298 } 
00:22:53.298 ] 00:22:53.298 }' 00:22:53.298 [2024-04-26 12:19:46.562659] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:22:53.298 [2024-04-26 12:19:46.562768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70700 ] 00:22:53.298 [2024-04-26 12:19:46.702950] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.557 [2024-04-26 12:19:46.872675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.815 [2024-04-26 12:19:47.085187] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.815 [2024-04-26 12:19:47.085990] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:54.381 12:19:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:54.381 12:19:47 -- common/autotest_common.sh@850 -- # return 0 00:22:54.381 12:19:47 -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:54.381 Running I/O for 10 seconds... 00:23:04.507 00:23:04.507 Latency(us) 00:23:04.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.507 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:04.507 Verification LBA range: start 0x0 length 0x2000 00:23:04.507 TLSTESTn1 : 10.02 3935.40 15.37 0.00 0.00 32462.12 7208.96 33602.09 00:23:04.507 =================================================================================================================== 00:23:04.507 Total : 3935.40 15.37 0.00 0.00 32462.12 7208.96 33602.09 00:23:04.507 0 00:23:04.507 12:19:57 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:04.507 12:19:57 -- target/tls.sh@214 -- # killprocess 70700 00:23:04.507 12:19:57 -- common/autotest_common.sh@936 -- # '[' -z 70700 ']' 00:23:04.507 12:19:57 -- common/autotest_common.sh@940 -- # kill -0 70700 00:23:04.507 12:19:57 -- common/autotest_common.sh@941 -- # uname 00:23:04.507 12:19:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:04.507 12:19:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70700 00:23:04.507 killing process with pid 70700 00:23:04.507 Received shutdown signal, test time was about 10.000000 seconds 00:23:04.507 00:23:04.507 Latency(us) 00:23:04.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.507 =================================================================================================================== 00:23:04.507 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:04.507 12:19:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:04.507 12:19:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:04.507 12:19:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70700' 00:23:04.507 12:19:57 -- common/autotest_common.sh@955 -- # kill 70700 00:23:04.507 [2024-04-26 12:19:57.797485] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:04.507 12:19:57 -- common/autotest_common.sh@960 -- # wait 70700 00:23:04.765 12:19:58 -- target/tls.sh@215 -- # killprocess 70671 00:23:04.765 12:19:58 
-- common/autotest_common.sh@936 -- # '[' -z 70671 ']' 00:23:04.765 12:19:58 -- common/autotest_common.sh@940 -- # kill -0 70671 00:23:04.765 12:19:58 -- common/autotest_common.sh@941 -- # uname 00:23:04.765 12:19:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:04.765 12:19:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70671 00:23:04.765 killing process with pid 70671 00:23:04.765 12:19:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:04.765 12:19:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:04.765 12:19:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70671' 00:23:04.765 12:19:58 -- common/autotest_common.sh@955 -- # kill 70671 00:23:04.765 [2024-04-26 12:19:58.083238] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:04.765 12:19:58 -- common/autotest_common.sh@960 -- # wait 70671 00:23:05.024 12:19:58 -- target/tls.sh@218 -- # nvmfappstart 00:23:05.024 12:19:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:05.024 12:19:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:05.024 12:19:58 -- common/autotest_common.sh@10 -- # set +x 00:23:05.024 12:19:58 -- nvmf/common.sh@470 -- # nvmfpid=70837 00:23:05.024 12:19:58 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:05.024 12:19:58 -- nvmf/common.sh@471 -- # waitforlisten 70837 00:23:05.024 12:19:58 -- common/autotest_common.sh@817 -- # '[' -z 70837 ']' 00:23:05.024 12:19:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.024 12:19:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:05.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.024 12:19:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.024 12:19:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:05.024 12:19:58 -- common/autotest_common.sh@10 -- # set +x 00:23:05.024 [2024-04-26 12:19:58.400343] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:23:05.024 [2024-04-26 12:19:58.400424] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.281 [2024-04-26 12:19:58.539042] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.281 [2024-04-26 12:19:58.661525] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.281 [2024-04-26 12:19:58.661594] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.281 [2024-04-26 12:19:58.661609] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.281 [2024-04-26 12:19:58.661620] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.281 [2024-04-26 12:19:58.661629] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:05.281 [2024-04-26 12:19:58.661673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.213 12:19:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:06.213 12:19:59 -- common/autotest_common.sh@850 -- # return 0 00:23:06.213 12:19:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:06.213 12:19:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:06.213 12:19:59 -- common/autotest_common.sh@10 -- # set +x 00:23:06.213 12:19:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.213 12:19:59 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.xj7XJu2Fhn 00:23:06.213 12:19:59 -- target/tls.sh@49 -- # local key=/tmp/tmp.xj7XJu2Fhn 00:23:06.213 12:19:59 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:06.213 [2024-04-26 12:19:59.613992] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.213 12:19:59 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:06.470 12:19:59 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:06.727 [2024-04-26 12:20:00.070074] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:06.727 [2024-04-26 12:20:00.070352] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.727 12:20:00 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:06.985 malloc0 00:23:06.985 12:20:00 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:07.243 12:20:00 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xj7XJu2Fhn 00:23:07.501 [2024-04-26 12:20:00.773527] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:07.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.501 12:20:00 -- target/tls.sh@222 -- # bdevperf_pid=70892 00:23:07.501 12:20:00 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:07.501 12:20:00 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.501 12:20:00 -- target/tls.sh@225 -- # waitforlisten 70892 /var/tmp/bdevperf.sock 00:23:07.501 12:20:00 -- common/autotest_common.sh@817 -- # '[' -z 70892 ']' 00:23:07.501 12:20:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.501 12:20:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:07.501 12:20:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.501 12:20:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:07.501 12:20:00 -- common/autotest_common.sh@10 -- # set +x 00:23:07.501 [2024-04-26 12:20:00.838505] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:23:07.501 [2024-04-26 12:20:00.838794] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70892 ] 00:23:07.757 [2024-04-26 12:20:00.973772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.757 [2024-04-26 12:20:01.100075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.335 12:20:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:08.335 12:20:01 -- common/autotest_common.sh@850 -- # return 0 00:23:08.335 12:20:01 -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xj7XJu2Fhn 00:23:08.603 12:20:02 -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:08.860 [2024-04-26 12:20:02.257985] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.860 nvme0n1 00:23:09.118 12:20:02 -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:09.118 Running I/O for 1 seconds... 00:23:10.049 00:23:10.049 Latency(us) 00:23:10.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.049 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:10.049 Verification LBA range: start 0x0 length 0x2000 00:23:10.049 nvme0n1 : 1.02 4035.52 15.76 0.00 0.00 31391.12 6851.49 27525.12 00:23:10.049 =================================================================================================================== 00:23:10.049 Total : 4035.52 15.76 0.00 0.00 31391.12 6851.49 27525.12 00:23:10.049 0 00:23:10.049 12:20:03 -- target/tls.sh@234 -- # killprocess 70892 00:23:10.049 12:20:03 -- common/autotest_common.sh@936 -- # '[' -z 70892 ']' 00:23:10.049 12:20:03 -- common/autotest_common.sh@940 -- # kill -0 70892 00:23:10.049 12:20:03 -- common/autotest_common.sh@941 -- # uname 00:23:10.049 12:20:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:10.049 12:20:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70892 00:23:10.049 killing process with pid 70892 00:23:10.049 12:20:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:10.049 12:20:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:10.049 12:20:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70892' 00:23:10.049 12:20:03 -- common/autotest_common.sh@955 -- # kill 70892 00:23:10.049 Received shutdown signal, test time was about 1.000000 seconds 00:23:10.049 00:23:10.049 Latency(us) 00:23:10.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.049 =================================================================================================================== 00:23:10.049 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:10.049 12:20:03 -- common/autotest_common.sh@960 -- # wait 70892 00:23:10.306 12:20:03 -- target/tls.sh@235 -- # killprocess 70837 00:23:10.306 12:20:03 -- common/autotest_common.sh@936 -- # '[' -z 70837 ']' 00:23:10.306 12:20:03 -- common/autotest_common.sh@940 -- # kill -0 70837 00:23:10.306 12:20:03 -- common/autotest_common.sh@941 -- # 
uname 00:23:10.306 12:20:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:10.306 12:20:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70837 00:23:10.564 killing process with pid 70837 00:23:10.564 12:20:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:10.564 12:20:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:10.564 12:20:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70837' 00:23:10.564 12:20:03 -- common/autotest_common.sh@955 -- # kill 70837 00:23:10.564 [2024-04-26 12:20:03.782074] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:10.564 12:20:03 -- common/autotest_common.sh@960 -- # wait 70837 00:23:10.834 12:20:04 -- target/tls.sh@238 -- # nvmfappstart 00:23:10.834 12:20:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:10.834 12:20:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:10.834 12:20:04 -- common/autotest_common.sh@10 -- # set +x 00:23:10.834 12:20:04 -- nvmf/common.sh@470 -- # nvmfpid=70943 00:23:10.834 12:20:04 -- nvmf/common.sh@471 -- # waitforlisten 70943 00:23:10.834 12:20:04 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:10.834 12:20:04 -- common/autotest_common.sh@817 -- # '[' -z 70943 ']' 00:23:10.834 12:20:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.834 12:20:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:10.834 12:20:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.834 12:20:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:10.834 12:20:04 -- common/autotest_common.sh@10 -- # set +x 00:23:10.834 [2024-04-26 12:20:04.112955] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:23:10.834 [2024-04-26 12:20:04.113069] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.834 [2024-04-26 12:20:04.247556] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.091 [2024-04-26 12:20:04.357925] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.091 [2024-04-26 12:20:04.358005] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.091 [2024-04-26 12:20:04.358047] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.091 [2024-04-26 12:20:04.358056] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.091 [2024-04-26 12:20:04.358064] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
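On the host side, the stage below starts another bdevperf instance, hands it the same PSK through the keyring API, and attaches a controller over TCP with TLS before running the verify workload. A sketch under the same assumptions (bdevperf already listening on /var/tmp/bdevperf.sock, PSK at /tmp/tmp.xj7XJu2Fhn):

  cd /home/vagrant/spdk_repo/spdk
  # register the PSK file as keyring entry key0 inside the bdevperf process
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xj7XJu2Fhn
  # attach to the TLS listener using that key, then drive the verify workload
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests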
00:23:11.091 [2024-04-26 12:20:04.358109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.655 12:20:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:11.655 12:20:05 -- common/autotest_common.sh@850 -- # return 0 00:23:11.655 12:20:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:11.655 12:20:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:11.655 12:20:05 -- common/autotest_common.sh@10 -- # set +x 00:23:11.655 12:20:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.655 12:20:05 -- target/tls.sh@239 -- # rpc_cmd 00:23:11.655 12:20:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.655 12:20:05 -- common/autotest_common.sh@10 -- # set +x 00:23:11.655 [2024-04-26 12:20:05.113942] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.912 malloc0 00:23:11.912 [2024-04-26 12:20:05.145444] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.912 [2024-04-26 12:20:05.145657] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.912 12:20:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.912 12:20:05 -- target/tls.sh@252 -- # bdevperf_pid=70975 00:23:11.912 12:20:05 -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:11.912 12:20:05 -- target/tls.sh@254 -- # waitforlisten 70975 /var/tmp/bdevperf.sock 00:23:11.912 12:20:05 -- common/autotest_common.sh@817 -- # '[' -z 70975 ']' 00:23:11.912 12:20:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.912 12:20:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:11.912 12:20:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.912 12:20:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:11.912 12:20:05 -- common/autotest_common.sh@10 -- # set +x 00:23:11.912 [2024-04-26 12:20:05.243828] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:23:11.912 [2024-04-26 12:20:05.244005] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70975 ] 00:23:12.169 [2024-04-26 12:20:05.396899] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.169 [2024-04-26 12:20:05.510789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.103 12:20:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:13.103 12:20:06 -- common/autotest_common.sh@850 -- # return 0 00:23:13.103 12:20:06 -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xj7XJu2Fhn 00:23:13.360 12:20:06 -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:13.618 [2024-04-26 12:20:06.884136] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.618 nvme0n1 00:23:13.618 12:20:06 -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:13.618 Running I/O for 1 seconds... 00:23:14.992 00:23:14.992 Latency(us) 00:23:14.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.992 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:14.992 Verification LBA range: start 0x0 length 0x2000 00:23:14.992 nvme0n1 : 1.02 3864.53 15.10 0.00 0.00 32771.19 6970.65 34317.03 00:23:14.992 =================================================================================================================== 00:23:14.992 Total : 3864.53 15.10 0.00 0.00 32771.19 6970.65 34317.03 00:23:14.992 0 00:23:14.992 12:20:08 -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:14.992 12:20:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.992 12:20:08 -- common/autotest_common.sh@10 -- # set +x 00:23:14.992 12:20:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.992 12:20:08 -- target/tls.sh@263 -- # tgtcfg='{ 00:23:14.992 "subsystems": [ 00:23:14.992 { 00:23:14.992 "subsystem": "keyring", 00:23:14.992 "config": [ 00:23:14.992 { 00:23:14.992 "method": "keyring_file_add_key", 00:23:14.992 "params": { 00:23:14.992 "name": "key0", 00:23:14.992 "path": "/tmp/tmp.xj7XJu2Fhn" 00:23:14.992 } 00:23:14.992 } 00:23:14.992 ] 00:23:14.992 }, 00:23:14.992 { 00:23:14.992 "subsystem": "iobuf", 00:23:14.992 "config": [ 00:23:14.992 { 00:23:14.992 "method": "iobuf_set_options", 00:23:14.992 "params": { 00:23:14.992 "small_pool_count": 8192, 00:23:14.992 "large_pool_count": 1024, 00:23:14.992 "small_bufsize": 8192, 00:23:14.992 "large_bufsize": 135168 00:23:14.992 } 00:23:14.992 } 00:23:14.992 ] 00:23:14.992 }, 00:23:14.992 { 00:23:14.992 "subsystem": "sock", 00:23:14.992 "config": [ 00:23:14.992 { 00:23:14.992 "method": "sock_impl_set_options", 00:23:14.992 "params": { 00:23:14.992 "impl_name": "uring", 00:23:14.992 "recv_buf_size": 2097152, 00:23:14.992 "send_buf_size": 2097152, 00:23:14.992 "enable_recv_pipe": true, 00:23:14.992 "enable_quickack": false, 00:23:14.992 "enable_placement_id": 0, 00:23:14.992 "enable_zerocopy_send_server": false, 00:23:14.992 "enable_zerocopy_send_client": false, 00:23:14.992 "zerocopy_threshold": 0, 
00:23:14.992 "tls_version": 0, 00:23:14.992 "enable_ktls": false 00:23:14.992 } 00:23:14.992 }, 00:23:14.992 { 00:23:14.992 "method": "sock_impl_set_options", 00:23:14.992 "params": { 00:23:14.992 "impl_name": "posix", 00:23:14.992 "recv_buf_size": 2097152, 00:23:14.992 "send_buf_size": 2097152, 00:23:14.992 "enable_recv_pipe": true, 00:23:14.992 "enable_quickack": false, 00:23:14.992 "enable_placement_id": 0, 00:23:14.992 "enable_zerocopy_send_server": true, 00:23:14.992 "enable_zerocopy_send_client": false, 00:23:14.992 "zerocopy_threshold": 0, 00:23:14.992 "tls_version": 0, 00:23:14.992 "enable_ktls": false 00:23:14.992 } 00:23:14.992 }, 00:23:14.992 { 00:23:14.992 "method": "sock_impl_set_options", 00:23:14.992 "params": { 00:23:14.992 "impl_name": "ssl", 00:23:14.992 "recv_buf_size": 4096, 00:23:14.992 "send_buf_size": 4096, 00:23:14.992 "enable_recv_pipe": true, 00:23:14.992 "enable_quickack": false, 00:23:14.992 "enable_placement_id": 0, 00:23:14.992 "enable_zerocopy_send_server": true, 00:23:14.992 "enable_zerocopy_send_client": false, 00:23:14.992 "zerocopy_threshold": 0, 00:23:14.992 "tls_version": 0, 00:23:14.992 "enable_ktls": false 00:23:14.992 } 00:23:14.992 } 00:23:14.992 ] 00:23:14.992 }, 00:23:14.992 { 00:23:14.992 "subsystem": "vmd", 00:23:14.992 "config": [] 00:23:14.992 }, 00:23:14.992 { 00:23:14.992 "subsystem": "accel", 00:23:14.993 "config": [ 00:23:14.993 { 00:23:14.993 "method": "accel_set_options", 00:23:14.993 "params": { 00:23:14.993 "small_cache_size": 128, 00:23:14.993 "large_cache_size": 16, 00:23:14.993 "task_count": 2048, 00:23:14.993 "sequence_count": 2048, 00:23:14.993 "buf_count": 2048 00:23:14.993 } 00:23:14.993 } 00:23:14.993 ] 00:23:14.993 }, 00:23:14.993 { 00:23:14.993 "subsystem": "bdev", 00:23:14.993 "config": [ 00:23:14.993 { 00:23:14.993 "method": "bdev_set_options", 00:23:14.993 "params": { 00:23:14.993 "bdev_io_pool_size": 65535, 00:23:14.993 "bdev_io_cache_size": 256, 00:23:14.993 "bdev_auto_examine": true, 00:23:14.993 "iobuf_small_cache_size": 128, 00:23:14.993 "iobuf_large_cache_size": 16 00:23:14.993 } 00:23:14.993 }, 00:23:14.993 { 00:23:14.993 "method": "bdev_raid_set_options", 00:23:14.993 "params": { 00:23:14.993 "process_window_size_kb": 1024 00:23:14.993 } 00:23:14.993 }, 00:23:14.993 { 00:23:14.993 "method": "bdev_iscsi_set_options", 00:23:14.993 "params": { 00:23:14.993 "timeout_sec": 30 00:23:14.993 } 00:23:14.993 }, 00:23:14.993 { 00:23:14.993 "method": "bdev_nvme_set_options", 00:23:14.993 "params": { 00:23:14.993 "action_on_timeout": "none", 00:23:14.993 "timeout_us": 0, 00:23:14.993 "timeout_admin_us": 0, 00:23:14.993 "keep_alive_timeout_ms": 10000, 00:23:14.993 "arbitration_burst": 0, 00:23:14.993 "low_priority_weight": 0, 00:23:14.993 "medium_priority_weight": 0, 00:23:14.993 "high_priority_weight": 0, 00:23:14.993 "nvme_adminq_poll_period_us": 10000, 00:23:14.993 "nvme_ioq_poll_period_us": 0, 00:23:14.993 "io_queue_requests": 0, 00:23:14.993 "delay_cmd_submit": true, 00:23:14.993 "transport_retry_count": 4, 00:23:14.993 "bdev_retry_count": 3, 00:23:14.993 "transport_ack_timeout": 0, 00:23:14.993 "ctrlr_loss_timeout_sec": 0, 00:23:14.993 "reconnect_delay_sec": 0, 00:23:14.993 "fast_io_fail_timeout_sec": 0, 00:23:14.993 "disable_auto_failback": false, 00:23:14.993 "generate_uuids": false, 00:23:14.993 "transport_tos": 0, 00:23:14.993 "nvme_error_stat": false, 00:23:14.993 "rdma_srq_size": 0, 00:23:14.993 "io_path_stat": false, 00:23:14.993 "allow_accel_sequence": false, 00:23:14.993 "rdma_max_cq_size": 0, 00:23:14.993 
"rdma_cm_event_timeout_ms": 0, 00:23:14.993 "dhchap_digests": [ 00:23:14.993 "sha256", 00:23:14.993 "sha384", 00:23:14.993 "sha512" 00:23:14.993 ], 00:23:14.993 "dhchap_dhgroups": [ 00:23:14.993 "null", 00:23:14.993 "ffdhe2048", 00:23:14.993 "ffdhe3072", 00:23:14.993 "ffdhe4096", 00:23:14.993 "ffdhe6144", 00:23:14.993 "ffdhe8192" 00:23:14.993 ] 00:23:14.993 } 00:23:14.993 }, 00:23:14.993 { 00:23:14.993 "method": "bdev_nvme_set_hotplug", 00:23:14.993 "params": { 00:23:14.993 "period_us": 100000, 00:23:14.993 "enable": false 00:23:14.993 } 00:23:14.993 }, 00:23:14.993 { 00:23:14.993 "method": "bdev_malloc_create", 00:23:14.993 "params": { 00:23:14.993 "name": "malloc0", 00:23:14.993 "num_blocks": 8192, 00:23:14.993 "block_size": 4096, 00:23:14.993 "physical_block_size": 4096, 00:23:14.993 "uuid": "87f3cd80-4721-4293-b1e5-5f0c3ffca47b", 00:23:14.993 "optimal_io_boundary": 0 00:23:14.993 } 00:23:14.993 }, 00:23:14.993 { 00:23:14.993 "method": "bdev_wait_for_examine" 00:23:14.993 } 00:23:14.993 ] 00:23:14.993 }, 00:23:14.993 { 00:23:14.993 "subsystem": "nbd", 00:23:14.993 "config": [] 00:23:14.993 }, 00:23:14.993 { 00:23:14.993 "subsystem": "scheduler", 00:23:14.993 "config": [ 00:23:14.993 { 00:23:14.993 "method": "framework_set_scheduler", 00:23:14.993 "params": { 00:23:14.993 "name": "static" 00:23:14.993 } 00:23:14.993 } 00:23:14.993 ] 00:23:14.993 }, 00:23:14.993 { 00:23:14.993 "subsystem": "nvmf", 00:23:14.993 "config": [ 00:23:14.993 { 00:23:14.993 "method": "nvmf_set_config", 00:23:14.993 "params": { 00:23:14.993 "discovery_filter": "match_any", 00:23:14.993 "admin_cmd_passthru": { 00:23:14.993 "identify_ctrlr": false 00:23:14.993 } 00:23:14.993 } 00:23:14.993 }, 00:23:14.993 { 00:23:14.993 "method": "nvmf_set_max_subsystems", 00:23:14.993 "params": { 00:23:14.993 "max_subsystems": 1024 00:23:14.993 } 00:23:14.993 }, 00:23:14.993 { 00:23:14.993 "method": "nvmf_set_crdt", 00:23:14.993 "params": { 00:23:14.993 "crdt1": 0, 00:23:14.993 "crdt2": 0, 00:23:14.993 "crdt3": 0 00:23:14.993 } 00:23:14.993 }, 00:23:14.993 { 00:23:14.993 "method": "nvmf_create_transport", 00:23:14.993 "params": { 00:23:14.993 "trtype": "TCP", 00:23:14.993 "max_queue_depth": 128, 00:23:14.993 "max_io_qpairs_per_ctrlr": 127, 00:23:14.993 "in_capsule_data_size": 4096, 00:23:14.993 "max_io_size": 131072, 00:23:14.993 "io_unit_size": 131072, 00:23:14.993 "max_aq_depth": 128, 00:23:14.993 "num_shared_buffers": 511, 00:23:14.993 "buf_cache_size": 4294967295, 00:23:14.993 "dif_insert_or_strip": false, 00:23:14.993 "zcopy": false, 00:23:14.993 "c2h_success": false, 00:23:14.993 "sock_priority": 0, 00:23:14.993 "abort_timeout_sec": 1, 00:23:14.993 "ack_timeout": 0, 00:23:14.993 "data_wr_pool_size": 0 00:23:14.993 } 00:23:14.993 }, 00:23:14.993 { 00:23:14.993 "method": "nvmf_create_subsystem", 00:23:14.993 "params": { 00:23:14.993 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.993 "allow_any_host": false, 00:23:14.993 "serial_number": "00000000000000000000", 00:23:14.993 "model_number": "SPDK bdev Controller", 00:23:14.993 "max_namespaces": 32, 00:23:14.993 "min_cntlid": 1, 00:23:14.993 "max_cntlid": 65519, 00:23:14.993 "ana_reporting": false 00:23:14.993 } 00:23:14.993 }, 00:23:14.993 { 00:23:14.993 "method": "nvmf_subsystem_add_host", 00:23:14.993 "params": { 00:23:14.993 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.993 "host": "nqn.2016-06.io.spdk:host1", 00:23:14.993 "psk": "key0" 00:23:14.993 } 00:23:14.993 }, 00:23:14.993 { 00:23:14.993 "method": "nvmf_subsystem_add_ns", 00:23:14.993 "params": { 00:23:14.993 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:14.993 "namespace": { 00:23:14.993 "nsid": 1, 00:23:14.993 "bdev_name": "malloc0", 00:23:14.993 "nguid": "87F3CD8047214293B1E55F0C3FFCA47B", 00:23:14.993 "uuid": "87f3cd80-4721-4293-b1e5-5f0c3ffca47b", 00:23:14.993 "no_auto_visible": false 00:23:14.993 } 00:23:14.993 } 00:23:14.993 }, 00:23:14.993 { 00:23:14.993 "method": "nvmf_subsystem_add_listener", 00:23:14.993 "params": { 00:23:14.993 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.993 "listen_address": { 00:23:14.993 "trtype": "TCP", 00:23:14.993 "adrfam": "IPv4", 00:23:14.993 "traddr": "10.0.0.2", 00:23:14.993 "trsvcid": "4420" 00:23:14.993 }, 00:23:14.993 "secure_channel": true 00:23:14.993 } 00:23:14.993 } 00:23:14.993 ] 00:23:14.993 } 00:23:14.993 ] 00:23:14.993 }' 00:23:14.993 12:20:08 -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:15.252 12:20:08 -- target/tls.sh@264 -- # bperfcfg='{ 00:23:15.252 "subsystems": [ 00:23:15.252 { 00:23:15.252 "subsystem": "keyring", 00:23:15.252 "config": [ 00:23:15.252 { 00:23:15.252 "method": "keyring_file_add_key", 00:23:15.252 "params": { 00:23:15.252 "name": "key0", 00:23:15.252 "path": "/tmp/tmp.xj7XJu2Fhn" 00:23:15.252 } 00:23:15.252 } 00:23:15.252 ] 00:23:15.252 }, 00:23:15.252 { 00:23:15.252 "subsystem": "iobuf", 00:23:15.252 "config": [ 00:23:15.252 { 00:23:15.252 "method": "iobuf_set_options", 00:23:15.252 "params": { 00:23:15.252 "small_pool_count": 8192, 00:23:15.252 "large_pool_count": 1024, 00:23:15.252 "small_bufsize": 8192, 00:23:15.252 "large_bufsize": 135168 00:23:15.252 } 00:23:15.252 } 00:23:15.252 ] 00:23:15.252 }, 00:23:15.252 { 00:23:15.252 "subsystem": "sock", 00:23:15.252 "config": [ 00:23:15.252 { 00:23:15.252 "method": "sock_impl_set_options", 00:23:15.252 "params": { 00:23:15.252 "impl_name": "uring", 00:23:15.252 "recv_buf_size": 2097152, 00:23:15.252 "send_buf_size": 2097152, 00:23:15.252 "enable_recv_pipe": true, 00:23:15.252 "enable_quickack": false, 00:23:15.252 "enable_placement_id": 0, 00:23:15.252 "enable_zerocopy_send_server": false, 00:23:15.252 "enable_zerocopy_send_client": false, 00:23:15.252 "zerocopy_threshold": 0, 00:23:15.252 "tls_version": 0, 00:23:15.252 "enable_ktls": false 00:23:15.252 } 00:23:15.252 }, 00:23:15.252 { 00:23:15.252 "method": "sock_impl_set_options", 00:23:15.252 "params": { 00:23:15.252 "impl_name": "posix", 00:23:15.252 "recv_buf_size": 2097152, 00:23:15.252 "send_buf_size": 2097152, 00:23:15.252 "enable_recv_pipe": true, 00:23:15.252 "enable_quickack": false, 00:23:15.252 "enable_placement_id": 0, 00:23:15.252 "enable_zerocopy_send_server": true, 00:23:15.252 "enable_zerocopy_send_client": false, 00:23:15.252 "zerocopy_threshold": 0, 00:23:15.252 "tls_version": 0, 00:23:15.252 "enable_ktls": false 00:23:15.252 } 00:23:15.252 }, 00:23:15.252 { 00:23:15.252 "method": "sock_impl_set_options", 00:23:15.252 "params": { 00:23:15.252 "impl_name": "ssl", 00:23:15.252 "recv_buf_size": 4096, 00:23:15.252 "send_buf_size": 4096, 00:23:15.252 "enable_recv_pipe": true, 00:23:15.252 "enable_quickack": false, 00:23:15.252 "enable_placement_id": 0, 00:23:15.252 "enable_zerocopy_send_server": true, 00:23:15.252 "enable_zerocopy_send_client": false, 00:23:15.252 "zerocopy_threshold": 0, 00:23:15.252 "tls_version": 0, 00:23:15.252 "enable_ktls": false 00:23:15.252 } 00:23:15.252 } 00:23:15.252 ] 00:23:15.252 }, 00:23:15.252 { 00:23:15.252 "subsystem": "vmd", 00:23:15.252 "config": [] 00:23:15.252 }, 00:23:15.252 { 00:23:15.252 "subsystem": "accel", 
00:23:15.252 "config": [ 00:23:15.252 { 00:23:15.252 "method": "accel_set_options", 00:23:15.252 "params": { 00:23:15.252 "small_cache_size": 128, 00:23:15.252 "large_cache_size": 16, 00:23:15.252 "task_count": 2048, 00:23:15.252 "sequence_count": 2048, 00:23:15.252 "buf_count": 2048 00:23:15.252 } 00:23:15.252 } 00:23:15.252 ] 00:23:15.252 }, 00:23:15.252 { 00:23:15.252 "subsystem": "bdev", 00:23:15.252 "config": [ 00:23:15.252 { 00:23:15.252 "method": "bdev_set_options", 00:23:15.252 "params": { 00:23:15.252 "bdev_io_pool_size": 65535, 00:23:15.252 "bdev_io_cache_size": 256, 00:23:15.252 "bdev_auto_examine": true, 00:23:15.252 "iobuf_small_cache_size": 128, 00:23:15.252 "iobuf_large_cache_size": 16 00:23:15.252 } 00:23:15.252 }, 00:23:15.252 { 00:23:15.252 "method": "bdev_raid_set_options", 00:23:15.252 "params": { 00:23:15.252 "process_window_size_kb": 1024 00:23:15.252 } 00:23:15.252 }, 00:23:15.252 { 00:23:15.252 "method": "bdev_iscsi_set_options", 00:23:15.252 "params": { 00:23:15.252 "timeout_sec": 30 00:23:15.252 } 00:23:15.252 }, 00:23:15.252 { 00:23:15.252 "method": "bdev_nvme_set_options", 00:23:15.252 "params": { 00:23:15.252 "action_on_timeout": "none", 00:23:15.252 "timeout_us": 0, 00:23:15.252 "timeout_admin_us": 0, 00:23:15.252 "keep_alive_timeout_ms": 10000, 00:23:15.252 "arbitration_burst": 0, 00:23:15.252 "low_priority_weight": 0, 00:23:15.252 "medium_priority_weight": 0, 00:23:15.252 "high_priority_weight": 0, 00:23:15.252 "nvme_adminq_poll_period_us": 10000, 00:23:15.252 "nvme_ioq_poll_period_us": 0, 00:23:15.252 "io_queue_requests": 512, 00:23:15.252 "delay_cmd_submit": true, 00:23:15.252 "transport_retry_count": 4, 00:23:15.252 "bdev_retry_count": 3, 00:23:15.252 "transport_ack_timeout": 0, 00:23:15.252 "ctrlr_loss_timeout_sec": 0, 00:23:15.252 "reconnect_delay_sec": 0, 00:23:15.252 "fast_io_fail_timeout_sec": 0, 00:23:15.252 "disable_auto_failback": false, 00:23:15.252 "generate_uuids": false, 00:23:15.252 "transport_tos": 0, 00:23:15.252 "nvme_error_stat": false, 00:23:15.252 "rdma_srq_size": 0, 00:23:15.252 "io_path_stat": false, 00:23:15.252 "allow_accel_sequence": false, 00:23:15.252 "rdma_max_cq_size": 0, 00:23:15.252 "rdma_cm_event_timeout_ms": 0, 00:23:15.252 "dhchap_digests": [ 00:23:15.253 "sha256", 00:23:15.253 "sha384", 00:23:15.253 "sha512" 00:23:15.253 ], 00:23:15.253 "dhchap_dhgroups": [ 00:23:15.253 "null", 00:23:15.253 "ffdhe2048", 00:23:15.253 "ffdhe3072", 00:23:15.253 "ffdhe4096", 00:23:15.253 "ffdhe6144", 00:23:15.253 "ffdhe8192" 00:23:15.253 ] 00:23:15.253 } 00:23:15.253 }, 00:23:15.253 { 00:23:15.253 "method": "bdev_nvme_attach_controller", 00:23:15.253 "params": { 00:23:15.253 "name": "nvme0", 00:23:15.253 "trtype": "TCP", 00:23:15.253 "adrfam": "IPv4", 00:23:15.253 "traddr": "10.0.0.2", 00:23:15.253 "trsvcid": "4420", 00:23:15.253 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.253 "prchk_reftag": false, 00:23:15.253 "prchk_guard": false, 00:23:15.253 "ctrlr_loss_timeout_sec": 0, 00:23:15.253 "reconnect_delay_sec": 0, 00:23:15.253 "fast_io_fail_timeout_sec": 0, 00:23:15.253 "psk": "key0", 00:23:15.253 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.253 "hdgst": false, 00:23:15.253 "ddgst": false 00:23:15.253 } 00:23:15.253 }, 00:23:15.253 { 00:23:15.253 "method": "bdev_nvme_set_hotplug", 00:23:15.253 "params": { 00:23:15.253 "period_us": 100000, 00:23:15.253 "enable": false 00:23:15.253 } 00:23:15.253 }, 00:23:15.253 { 00:23:15.253 "method": "bdev_enable_histogram", 00:23:15.253 "params": { 00:23:15.253 "name": "nvme0n1", 00:23:15.253 
"enable": true 00:23:15.253 } 00:23:15.253 }, 00:23:15.253 { 00:23:15.253 "method": "bdev_wait_for_examine" 00:23:15.253 } 00:23:15.253 ] 00:23:15.253 }, 00:23:15.253 { 00:23:15.253 "subsystem": "nbd", 00:23:15.253 "config": [] 00:23:15.253 } 00:23:15.253 ] 00:23:15.253 }' 00:23:15.253 12:20:08 -- target/tls.sh@266 -- # killprocess 70975 00:23:15.253 12:20:08 -- common/autotest_common.sh@936 -- # '[' -z 70975 ']' 00:23:15.253 12:20:08 -- common/autotest_common.sh@940 -- # kill -0 70975 00:23:15.253 12:20:08 -- common/autotest_common.sh@941 -- # uname 00:23:15.253 12:20:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:15.253 12:20:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70975 00:23:15.253 killing process with pid 70975 00:23:15.253 Received shutdown signal, test time was about 1.000000 seconds 00:23:15.253 00:23:15.253 Latency(us) 00:23:15.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.253 =================================================================================================================== 00:23:15.253 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:15.253 12:20:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:15.253 12:20:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:15.253 12:20:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70975' 00:23:15.253 12:20:08 -- common/autotest_common.sh@955 -- # kill 70975 00:23:15.253 12:20:08 -- common/autotest_common.sh@960 -- # wait 70975 00:23:15.511 12:20:08 -- target/tls.sh@267 -- # killprocess 70943 00:23:15.511 12:20:08 -- common/autotest_common.sh@936 -- # '[' -z 70943 ']' 00:23:15.511 12:20:08 -- common/autotest_common.sh@940 -- # kill -0 70943 00:23:15.511 12:20:08 -- common/autotest_common.sh@941 -- # uname 00:23:15.511 12:20:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:15.511 12:20:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70943 00:23:15.511 killing process with pid 70943 00:23:15.511 12:20:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:15.511 12:20:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:15.511 12:20:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70943' 00:23:15.511 12:20:08 -- common/autotest_common.sh@955 -- # kill 70943 00:23:15.511 12:20:08 -- common/autotest_common.sh@960 -- # wait 70943 00:23:15.770 12:20:09 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:15.770 12:20:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:15.770 12:20:09 -- target/tls.sh@269 -- # echo '{ 00:23:15.770 "subsystems": [ 00:23:15.770 { 00:23:15.770 "subsystem": "keyring", 00:23:15.770 "config": [ 00:23:15.770 { 00:23:15.770 "method": "keyring_file_add_key", 00:23:15.770 "params": { 00:23:15.770 "name": "key0", 00:23:15.770 "path": "/tmp/tmp.xj7XJu2Fhn" 00:23:15.770 } 00:23:15.770 } 00:23:15.770 ] 00:23:15.770 }, 00:23:15.770 { 00:23:15.770 "subsystem": "iobuf", 00:23:15.770 "config": [ 00:23:15.770 { 00:23:15.770 "method": "iobuf_set_options", 00:23:15.770 "params": { 00:23:15.770 "small_pool_count": 8192, 00:23:15.770 "large_pool_count": 1024, 00:23:15.770 "small_bufsize": 8192, 00:23:15.770 "large_bufsize": 135168 00:23:15.770 } 00:23:15.770 } 00:23:15.770 ] 00:23:15.770 }, 00:23:15.770 { 00:23:15.770 "subsystem": "sock", 00:23:15.770 "config": [ 00:23:15.770 { 00:23:15.770 "method": "sock_impl_set_options", 00:23:15.770 "params": { 00:23:15.770 
"impl_name": "uring", 00:23:15.770 "recv_buf_size": 2097152, 00:23:15.770 "send_buf_size": 2097152, 00:23:15.770 "enable_recv_pipe": true, 00:23:15.770 "enable_quickack": false, 00:23:15.770 "enable_placement_id": 0, 00:23:15.770 "enable_zerocopy_send_server": false, 00:23:15.770 "enable_zerocopy_send_client": false, 00:23:15.770 "zerocopy_threshold": 0, 00:23:15.770 "tls_version": 0, 00:23:15.770 "enable_ktls": false 00:23:15.770 } 00:23:15.770 }, 00:23:15.770 { 00:23:15.770 "method": "sock_impl_set_options", 00:23:15.770 "params": { 00:23:15.770 "impl_name": "posix", 00:23:15.770 "recv_buf_size": 2097152, 00:23:15.770 "send_buf_size": 2097152, 00:23:15.770 "enable_recv_pipe": true, 00:23:15.770 "enable_quickack": false, 00:23:15.770 "enable_placement_id": 0, 00:23:15.770 "enable_zerocopy_send_server": true, 00:23:15.770 "enable_zerocopy_send_client": false, 00:23:15.770 "zerocopy_threshold": 0, 00:23:15.770 "tls_version": 0, 00:23:15.770 "enable_ktls": false 00:23:15.770 } 00:23:15.770 }, 00:23:15.770 { 00:23:15.770 "method": "sock_impl_set_options", 00:23:15.770 "params": { 00:23:15.770 "impl_name": "ssl", 00:23:15.770 "recv_buf_size": 4096, 00:23:15.770 "send_buf_size": 4096, 00:23:15.770 "enable_recv_pipe": true, 00:23:15.770 "enable_quickack": false, 00:23:15.770 "enable_placement_id": 0, 00:23:15.770 "enable_zerocopy_send_server": true, 00:23:15.770 "enable_zerocopy_send_client": false, 00:23:15.770 "zerocopy_threshold": 0, 00:23:15.770 "tls_version": 0, 00:23:15.770 "enable_ktls": false 00:23:15.770 } 00:23:15.770 } 00:23:15.770 ] 00:23:15.770 }, 00:23:15.770 { 00:23:15.770 "subsystem": "vmd", 00:23:15.770 "config": [] 00:23:15.770 }, 00:23:15.770 { 00:23:15.770 "subsystem": "accel", 00:23:15.770 "config": [ 00:23:15.770 { 00:23:15.770 "method": "accel_set_options", 00:23:15.770 "params": { 00:23:15.770 "small_cache_size": 128, 00:23:15.770 "large_cache_size": 16, 00:23:15.770 "task_count": 2048, 00:23:15.770 "sequence_count": 2048, 00:23:15.770 "buf_count": 2048 00:23:15.770 } 00:23:15.770 } 00:23:15.770 ] 00:23:15.770 }, 00:23:15.770 { 00:23:15.770 "subsystem": "bdev", 00:23:15.770 "config": [ 00:23:15.770 { 00:23:15.770 "method": "bdev_set_options", 00:23:15.770 "params": { 00:23:15.770 "bdev_io_pool_size": 65535, 00:23:15.770 "bdev_io_cache_size": 256, 00:23:15.770 "bdev_auto_examine": true, 00:23:15.770 "iobuf_small_cache_size": 128, 00:23:15.770 "iobuf_large_cache_size": 16 00:23:15.770 } 00:23:15.770 }, 00:23:15.770 { 00:23:15.770 "method": "bdev_raid_set_options", 00:23:15.770 "params": { 00:23:15.770 "process_window_size_kb": 1024 00:23:15.770 } 00:23:15.770 }, 00:23:15.770 { 00:23:15.770 "method": "bdev_iscsi_set_options", 00:23:15.770 "params": { 00:23:15.770 "timeout_sec": 30 00:23:15.770 } 00:23:15.770 }, 00:23:15.770 { 00:23:15.770 "method": "bdev_nvme_set_options", 00:23:15.770 "params": { 00:23:15.770 "action_on_timeout": "none", 00:23:15.770 "timeout_us": 0, 00:23:15.770 "timeout_admin_us": 0, 00:23:15.770 "keep_alive_timeout_ms": 10000, 00:23:15.770 "arbitration_burst": 0, 00:23:15.770 "low_priority_weight": 0, 00:23:15.770 "medium_priority_weight": 0, 00:23:15.770 "high_priority_weight": 0, 00:23:15.770 "nvme_adminq_poll_period_us": 10000, 00:23:15.770 "nvme_ioq_poll_period_us": 0, 00:23:15.770 "io_queue_requests": 0, 00:23:15.770 "delay_cmd_submit": true, 00:23:15.771 "transport_retry_count": 4, 00:23:15.771 "bdev_retry_count": 3, 00:23:15.771 "transport_ack_timeout": 0, 00:23:15.771 "ctrlr_loss_timeout_sec": 0, 00:23:15.771 "reconnect_delay_sec": 0, 
00:23:15.771 "fast_io_fail_timeout_sec": 0, 00:23:15.771 "disable_auto_failback": false, 00:23:15.771 "generate_uuids": false, 00:23:15.771 "transport_tos": 0, 00:23:15.771 "nvme_error_stat": false, 00:23:15.771 "rdma_srq_size": 0, 00:23:15.771 "io_path_stat": false, 00:23:15.771 "allow_accel_sequence": false, 00:23:15.771 "rdma_max_cq_size": 0, 00:23:15.771 "rdma_cm_event_timeout_ms": 0, 00:23:15.771 "dhchap_digests": [ 00:23:15.771 "sha256", 00:23:15.771 "sha384", 00:23:15.771 "sha512" 00:23:15.771 ], 00:23:15.771 "dhchap_dhgroups": [ 00:23:15.771 "null", 00:23:15.771 "ffdhe2048", 00:23:15.771 "ffdhe3072", 00:23:15.771 "ffdhe4096", 00:23:15.771 "ffdhe6144", 00:23:15.771 "ffdhe8192" 00:23:15.771 ] 00:23:15.771 } 00:23:15.771 }, 00:23:15.771 { 00:23:15.771 "method": "bdev_nvme_set_hotplug", 00:23:15.771 "params": { 00:23:15.771 "period_us": 100000, 00:23:15.771 "enable": false 00:23:15.771 } 00:23:15.771 }, 00:23:15.771 { 00:23:15.771 "method": "bdev_malloc_create", 00:23:15.771 "params": { 00:23:15.771 "name": "malloc0", 00:23:15.771 "num_blocks": 8192, 00:23:15.771 "block_size": 4096, 00:23:15.771 "physical_block_size": 4096, 00:23:15.771 "uuid": "87f3cd80-4721-4293-b1e5-5f0c3ffca47b", 00:23:15.771 "optimal_io_boundary": 0 00:23:15.771 } 00:23:15.771 }, 00:23:15.771 { 00:23:15.771 "method": "bdev_wait_for_examine" 00:23:15.771 } 00:23:15.771 ] 00:23:15.771 }, 00:23:15.771 { 00:23:15.771 "subsystem": "nbd", 00:23:15.771 "config": [] 00:23:15.771 }, 00:23:15.771 { 00:23:15.771 "subsystem": "scheduler", 00:23:15.771 "config": [ 00:23:15.771 { 00:23:15.771 "method": "framework_set_scheduler", 00:23:15.771 "params": { 00:23:15.771 "name": "static" 00:23:15.771 } 00:23:15.771 } 00:23:15.771 ] 00:23:15.771 }, 00:23:15.771 { 00:23:15.771 "subsystem": "nvmf", 00:23:15.771 "config": [ 00:23:15.771 { 00:23:15.771 "method": "nvmf_set_config", 00:23:15.771 "params": { 00:23:15.771 "discovery_filter": "match_any", 00:23:15.771 "admin_cmd_passthru": { 00:23:15.771 "identify_ctrlr": false 00:23:15.771 } 00:23:15.771 } 00:23:15.771 }, 00:23:15.771 { 00:23:15.771 "method": "nvmf_set_max_subsystems", 00:23:15.771 "params": { 00:23:15.771 "max_subsystems": 1024 00:23:15.771 } 00:23:15.771 }, 00:23:15.771 { 00:23:15.771 "method": "nvmf_set_crdt", 00:23:15.771 "params": { 00:23:15.771 "crdt1": 0, 00:23:15.771 "crdt2": 0, 00:23:15.771 "crdt3": 0 00:23:15.771 } 00:23:15.771 }, 00:23:15.771 { 00:23:15.771 "method": "nvmf_create_transport", 00:23:15.771 "params": { 00:23:15.771 "trtype": "TCP", 00:23:15.771 "max_queue_depth": 128, 00:23:15.771 "max_io_qpairs_per_ctrlr": 127, 00:23:15.771 "in_capsule_data_size": 4096, 00:23:15.771 "max_io_size": 131072, 00:23:15.771 "io_unit_size": 131072, 00:23:15.771 "max_aq_depth": 128, 00:23:15.771 "num_shared_buffers": 511, 00:23:15.771 "buf_cache_size": 4294967295, 00:23:15.771 "dif_insert_or_strip": false, 00:23:15.771 "zcopy": false, 00:23:15.771 "c2h_success": false, 00:23:15.771 "sock_priority": 0, 00:23:15.771 "abort_timeout_sec": 1, 00:23:15.771 "ack_timeout": 0, 00:23:15.771 "data_wr_pool_size": 0 00:23:15.771 } 00:23:15.771 }, 00:23:15.771 { 00:23:15.771 "method": "nvmf_create_subsystem", 00:23:15.771 "params": { 00:23:15.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.771 "allow_any_host": false, 00:23:15.771 "serial_number": "00000000000000000000", 00:23:15.771 "model_number": "SPDK bdev Controller", 00:23:15.771 "max_namespaces": 32, 00:23:15.771 "min_cntlid": 1, 00:23:15.771 "max_cntlid": 65519, 00:23:15.771 "ana_reporting": false 00:23:15.771 } 00:23:15.771 }, 
00:23:15.771 { 00:23:15.771 "method": "nvmf_subsystem_add_host", 00:23:15.771 "params": { 00:23:15.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.771 "host": "nqn.2016-06.io.spdk:host1", 00:23:15.771 "psk": "key0" 00:23:15.771 } 00:23:15.771 }, 00:23:15.771 { 00:23:15.771 "method": "nvmf_subsystem_add_ns", 00:23:15.771 "params": { 00:23:15.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.771 "namespace": { 00:23:15.771 "nsid": 1, 00:23:15.771 "bdev_name": "malloc0", 00:23:15.771 "nguid": "87F3CD8047214293B1E55F0C3FFCA47B", 00:23:15.771 "uuid": "87f3cd80-4721-4293-b1e5-5f0c3ffca47b", 00:23:15.771 "no_auto_visible": false 00:23:15.771 } 00:23:15.771 } 00:23:15.771 }, 00:23:15.771 { 00:23:15.771 "method": "nvmf_subsystem_add_listener", 00:23:15.771 "params": { 00:23:15.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.771 "listen_address": { 00:23:15.771 "trtype": "TCP", 00:23:15.771 "adrfam": "IPv4", 00:23:15.771 "traddr": "10.0.0.2", 00:23:15.771 "trsvcid": "4420" 00:23:15.771 }, 00:23:15.771 "secure_channel": true 00:23:15.771 } 00:23:15.771 } 00:23:15.771 ] 00:23:15.771 } 00:23:15.771 ] 00:23:15.771 }' 00:23:15.771 12:20:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:15.771 12:20:09 -- common/autotest_common.sh@10 -- # set +x 00:23:15.771 12:20:09 -- nvmf/common.sh@470 -- # nvmfpid=71041 00:23:15.771 12:20:09 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:15.771 12:20:09 -- nvmf/common.sh@471 -- # waitforlisten 71041 00:23:15.771 12:20:09 -- common/autotest_common.sh@817 -- # '[' -z 71041 ']' 00:23:15.771 12:20:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.771 12:20:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:15.771 12:20:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.771 12:20:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:15.771 12:20:09 -- common/autotest_common.sh@10 -- # set +x 00:23:15.771 [2024-04-26 12:20:09.215768] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:23:15.771 [2024-04-26 12:20:09.215878] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.029 [2024-04-26 12:20:09.356513] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.029 [2024-04-26 12:20:09.473959] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.029 [2024-04-26 12:20:09.474020] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.029 [2024-04-26 12:20:09.474033] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.030 [2024-04-26 12:20:09.474041] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.030 [2024-04-26 12:20:09.474049] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
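The two JSON documents captured via save_config are exactly what this final stage feeds back in: the target above was relaunched with the saved target config on /dev/fd/62, and the fresh bdevperf below is launched with the saved bdevperf config on /dev/fd/63. A rough sketch of that round trip, assuming bash process substitution is what backs those /dev/fd descriptors:

  cd /home/vagrant/spdk_repo/spdk
  tgtcfg=$(scripts/rpc.py save_config)                               # target config dumped at tls.sh@263 above
  bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)   # bdevperf config dumped at tls.sh@264 above
  # restart both processes purely from the saved configuration, with no further setup RPCs
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &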
00:23:16.030 [2024-04-26 12:20:09.474182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.288 [2024-04-26 12:20:09.719235] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.288 [2024-04-26 12:20:09.751140] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:16.288 [2024-04-26 12:20:09.751418] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.856 12:20:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:16.856 12:20:10 -- common/autotest_common.sh@850 -- # return 0 00:23:16.856 12:20:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:16.856 12:20:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:16.856 12:20:10 -- common/autotest_common.sh@10 -- # set +x 00:23:16.856 12:20:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.856 12:20:10 -- target/tls.sh@272 -- # bdevperf_pid=71073 00:23:16.856 12:20:10 -- target/tls.sh@273 -- # waitforlisten 71073 /var/tmp/bdevperf.sock 00:23:16.856 12:20:10 -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:16.856 12:20:10 -- common/autotest_common.sh@817 -- # '[' -z 71073 ']' 00:23:16.856 12:20:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.856 12:20:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:16.856 12:20:10 -- target/tls.sh@270 -- # echo '{ 00:23:16.856 "subsystems": [ 00:23:16.856 { 00:23:16.856 "subsystem": "keyring", 00:23:16.856 "config": [ 00:23:16.856 { 00:23:16.856 "method": "keyring_file_add_key", 00:23:16.856 "params": { 00:23:16.856 "name": "key0", 00:23:16.856 "path": "/tmp/tmp.xj7XJu2Fhn" 00:23:16.856 } 00:23:16.856 } 00:23:16.856 ] 00:23:16.856 }, 00:23:16.856 { 00:23:16.856 "subsystem": "iobuf", 00:23:16.856 "config": [ 00:23:16.856 { 00:23:16.856 "method": "iobuf_set_options", 00:23:16.856 "params": { 00:23:16.856 "small_pool_count": 8192, 00:23:16.856 "large_pool_count": 1024, 00:23:16.856 "small_bufsize": 8192, 00:23:16.856 "large_bufsize": 135168 00:23:16.856 } 00:23:16.856 } 00:23:16.856 ] 00:23:16.856 }, 00:23:16.856 { 00:23:16.856 "subsystem": "sock", 00:23:16.856 "config": [ 00:23:16.856 { 00:23:16.856 "method": "sock_impl_set_options", 00:23:16.856 "params": { 00:23:16.856 "impl_name": "uring", 00:23:16.856 "recv_buf_size": 2097152, 00:23:16.856 "send_buf_size": 2097152, 00:23:16.856 "enable_recv_pipe": true, 00:23:16.856 "enable_quickack": false, 00:23:16.856 "enable_placement_id": 0, 00:23:16.856 "enable_zerocopy_send_server": false, 00:23:16.856 "enable_zerocopy_send_client": false, 00:23:16.856 "zerocopy_threshold": 0, 00:23:16.856 "tls_version": 0, 00:23:16.856 "enable_ktls": false 00:23:16.856 } 00:23:16.856 }, 00:23:16.856 { 00:23:16.856 "method": "sock_impl_set_options", 00:23:16.856 "params": { 00:23:16.856 "impl_name": "posix", 00:23:16.856 "recv_buf_size": 2097152, 00:23:16.856 "send_buf_size": 2097152, 00:23:16.856 "enable_recv_pipe": true, 00:23:16.856 "enable_quickack": false, 00:23:16.856 "enable_placement_id": 0, 00:23:16.856 "enable_zerocopy_send_server": true, 00:23:16.856 "enable_zerocopy_send_client": false, 00:23:16.856 "zerocopy_threshold": 0, 00:23:16.856 "tls_version": 0, 00:23:16.856 "enable_ktls": false 00:23:16.856 } 00:23:16.856 }, 00:23:16.856 { 00:23:16.856 "method": "sock_impl_set_options", 
00:23:16.856 "params": { 00:23:16.856 "impl_name": "ssl", 00:23:16.856 "recv_buf_size": 4096, 00:23:16.856 "send_buf_size": 4096, 00:23:16.856 "enable_recv_pipe": true, 00:23:16.856 "enable_quickack": false, 00:23:16.856 "enable_placement_id": 0, 00:23:16.856 "enable_zerocopy_send_server": true, 00:23:16.856 "enable_zerocopy_send_client": false, 00:23:16.856 "zerocopy_threshold": 0, 00:23:16.856 "tls_version": 0, 00:23:16.856 "enable_ktls": false 00:23:16.856 } 00:23:16.856 } 00:23:16.856 ] 00:23:16.856 }, 00:23:16.856 { 00:23:16.856 "subsystem": "vmd", 00:23:16.856 "config": [] 00:23:16.856 }, 00:23:16.856 { 00:23:16.856 "subsystem": "accel", 00:23:16.856 "config": [ 00:23:16.856 { 00:23:16.856 "method": "accel_set_options", 00:23:16.856 "params": { 00:23:16.856 "small_cache_size": 128, 00:23:16.856 "large_cache_size": 16, 00:23:16.856 "task_count": 2048, 00:23:16.856 "sequence_count": 2048, 00:23:16.856 "buf_count": 2048 00:23:16.856 } 00:23:16.856 } 00:23:16.856 ] 00:23:16.856 }, 00:23:16.856 { 00:23:16.856 "subsystem": "bdev", 00:23:16.856 "config": [ 00:23:16.856 { 00:23:16.856 "method": "bdev_set_options", 00:23:16.856 "params": { 00:23:16.856 "bdev_io_pool_size": 65535, 00:23:16.856 "bdev_io_cache_size": 256, 00:23:16.856 "bdev_auto_examine": true, 00:23:16.856 "iobuf_small_cache_size": 128, 00:23:16.856 "iobuf_large_cache_size": 16 00:23:16.856 } 00:23:16.856 }, 00:23:16.856 { 00:23:16.856 "method": "bdev_raid_set_options", 00:23:16.856 "params": { 00:23:16.856 "process_window_size_kb": 1024 00:23:16.856 } 00:23:16.856 }, 00:23:16.856 { 00:23:16.856 "method": "bdev_iscsi_set_options", 00:23:16.856 "params": { 00:23:16.856 "timeout_sec": 30 00:23:16.856 } 00:23:16.856 }, 00:23:16.856 { 00:23:16.856 "method": "bdev_nvme_set_options", 00:23:16.856 "params": { 00:23:16.856 "action_on_timeout": "none", 00:23:16.856 "timeout_us": 0, 00:23:16.856 "timeout_admin_us": 0, 00:23:16.856 "keep_alive_timeout_ms": 10000, 00:23:16.856 "arbitration_burst": 0, 00:23:16.856 "low_priority_weight": 0, 00:23:16.856 "medium_priority_weight": 0, 00:23:16.856 "high_priority_weight": 0, 00:23:16.856 "nvme_adminq_poll_period_us": 10000, 00:23:16.856 "nvme_ioq_poll_period_us": 0, 00:23:16.856 "io_queue_requests": 512, 00:23:16.856 "delay_cmd_submit": true, 00:23:16.856 "transport_retry_count": 4, 00:23:16.856 "bdev_retry_count": 3, 00:23:16.856 "transport_ack_timeout": 0, 00:23:16.856 "ctrlr_loss_timeout_sec": 0, 00:23:16.856 "reconnect_delay_sec": 0, 00:23:16.856 "fast_io_fail_timeout_sec": 0, 00:23:16.856 "disable_auto_failback": false, 00:23:16.856 "generaWaiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.856 12:20:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:16.856 12:20:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:16.856 12:20:10 -- common/autotest_common.sh@10 -- # set +x 00:23:16.856 te_uuids": false, 00:23:16.856 "transport_tos": 0, 00:23:16.856 "nvme_error_stat": false, 00:23:16.856 "rdma_srq_size": 0, 00:23:16.856 "io_path_stat": false, 00:23:16.856 "allow_accel_sequence": false, 00:23:16.856 "rdma_max_cq_size": 0, 00:23:16.856 "rdma_cm_event_timeout_ms": 0, 00:23:16.856 "dhchap_digests": [ 00:23:16.856 "sha256", 00:23:16.856 "sha384", 00:23:16.856 "sha512" 00:23:16.856 ], 00:23:16.856 "dhchap_dhgroups": [ 00:23:16.856 "null", 00:23:16.856 "ffdhe2048", 00:23:16.856 "ffdhe3072", 00:23:16.856 "ffdhe4096", 00:23:16.856 "ffdhe6144", 00:23:16.856 "ffdhe8192" 00:23:16.856 ] 00:23:16.856 } 00:23:16.856 }, 00:23:16.856 { 00:23:16.856 "method": "bdev_nvme_attach_controller", 00:23:16.856 "params": { 00:23:16.856 "name": "nvme0", 00:23:16.856 "trtype": "TCP", 00:23:16.856 "adrfam": "IPv4", 00:23:16.856 "traddr": "10.0.0.2", 00:23:16.856 "trsvcid": "4420", 00:23:16.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.856 "prchk_reftag": false, 00:23:16.856 "prchk_guard": false, 00:23:16.856 "ctrlr_loss_timeout_sec": 0, 00:23:16.856 "reconnect_delay_sec": 0, 00:23:16.856 "fast_io_fail_timeout_sec": 0, 00:23:16.856 "psk": "key0", 00:23:16.857 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.857 "hdgst": false, 00:23:16.857 "ddgst": false 00:23:16.857 } 00:23:16.857 }, 00:23:16.857 { 00:23:16.857 "method": "bdev_nvme_set_hotplug", 00:23:16.857 "params": { 00:23:16.857 "period_us": 100000, 00:23:16.857 "enable": false 00:23:16.857 } 00:23:16.857 }, 00:23:16.857 { 00:23:16.857 "method": "bdev_enable_histogram", 00:23:16.857 "params": { 00:23:16.857 "name": "nvme0n1", 00:23:16.857 "enable": true 00:23:16.857 } 00:23:16.857 }, 00:23:16.857 { 00:23:16.857 "method": "bdev_wait_for_examine" 00:23:16.857 } 00:23:16.857 ] 00:23:16.857 }, 00:23:16.857 { 00:23:16.857 "subsystem": "nbd", 00:23:16.857 "config": [] 00:23:16.857 } 00:23:16.857 ] 00:23:16.857 }' 00:23:16.857 [2024-04-26 12:20:10.317322] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:23:16.857 [2024-04-26 12:20:10.318471] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71073 ] 00:23:17.116 [2024-04-26 12:20:10.466795] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.374 [2024-04-26 12:20:10.595837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.374 [2024-04-26 12:20:10.773373] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:17.941 12:20:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:17.941 12:20:11 -- common/autotest_common.sh@850 -- # return 0 00:23:17.941 12:20:11 -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:17.941 12:20:11 -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:18.200 12:20:11 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.200 12:20:11 -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:18.200 Running I/O for 1 seconds... 
00:23:19.574 00:23:19.574 Latency(us) 00:23:19.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.574 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:19.574 Verification LBA range: start 0x0 length 0x2000 00:23:19.574 nvme0n1 : 1.03 3952.55 15.44 0.00 0.00 31942.27 7447.27 20256.58 00:23:19.574 =================================================================================================================== 00:23:19.574 Total : 3952.55 15.44 0.00 0.00 31942.27 7447.27 20256.58 00:23:19.574 0 00:23:19.574 12:20:12 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:19.574 12:20:12 -- target/tls.sh@279 -- # cleanup 00:23:19.574 12:20:12 -- target/tls.sh@15 -- # process_shm --id 0 00:23:19.574 12:20:12 -- common/autotest_common.sh@794 -- # type=--id 00:23:19.574 12:20:12 -- common/autotest_common.sh@795 -- # id=0 00:23:19.574 12:20:12 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:23:19.574 12:20:12 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:19.574 12:20:12 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:23:19.574 12:20:12 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:23:19.574 12:20:12 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:23:19.574 12:20:12 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:19.574 nvmf_trace.0 00:23:19.574 12:20:12 -- common/autotest_common.sh@809 -- # return 0 00:23:19.574 12:20:12 -- target/tls.sh@16 -- # killprocess 71073 00:23:19.574 12:20:12 -- common/autotest_common.sh@936 -- # '[' -z 71073 ']' 00:23:19.574 12:20:12 -- common/autotest_common.sh@940 -- # kill -0 71073 00:23:19.574 12:20:12 -- common/autotest_common.sh@941 -- # uname 00:23:19.574 12:20:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:19.574 12:20:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71073 00:23:19.574 killing process with pid 71073 00:23:19.574 Received shutdown signal, test time was about 1.000000 seconds 00:23:19.574 00:23:19.574 Latency(us) 00:23:19.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.574 =================================================================================================================== 00:23:19.574 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:19.574 12:20:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:19.574 12:20:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:19.574 12:20:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71073' 00:23:19.574 12:20:12 -- common/autotest_common.sh@955 -- # kill 71073 00:23:19.574 12:20:12 -- common/autotest_common.sh@960 -- # wait 71073 00:23:19.574 12:20:12 -- target/tls.sh@17 -- # nvmftestfini 00:23:19.574 12:20:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:19.574 12:20:12 -- nvmf/common.sh@117 -- # sync 00:23:19.574 12:20:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:19.574 12:20:13 -- nvmf/common.sh@120 -- # set +e 00:23:19.574 12:20:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:19.574 12:20:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:19.574 rmmod nvme_tcp 00:23:19.833 rmmod nvme_fabrics 00:23:19.833 rmmod nvme_keyring 00:23:19.833 12:20:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:19.833 12:20:13 -- nvmf/common.sh@124 -- # set -e 00:23:19.833 12:20:13 -- 
nvmf/common.sh@125 -- # return 0 00:23:19.833 12:20:13 -- nvmf/common.sh@478 -- # '[' -n 71041 ']' 00:23:19.833 12:20:13 -- nvmf/common.sh@479 -- # killprocess 71041 00:23:19.833 12:20:13 -- common/autotest_common.sh@936 -- # '[' -z 71041 ']' 00:23:19.833 12:20:13 -- common/autotest_common.sh@940 -- # kill -0 71041 00:23:19.833 12:20:13 -- common/autotest_common.sh@941 -- # uname 00:23:19.833 12:20:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:19.833 12:20:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71041 00:23:19.833 killing process with pid 71041 00:23:19.833 12:20:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:19.833 12:20:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:19.833 12:20:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71041' 00:23:19.833 12:20:13 -- common/autotest_common.sh@955 -- # kill 71041 00:23:19.833 12:20:13 -- common/autotest_common.sh@960 -- # wait 71041 00:23:20.091 12:20:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:20.091 12:20:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:20.091 12:20:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:20.091 12:20:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:20.091 12:20:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:20.091 12:20:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.091 12:20:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:20.091 12:20:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.091 12:20:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:20.091 12:20:13 -- target/tls.sh@18 -- # rm -f /tmp/tmp.hBQOkHOwqY /tmp/tmp.hwc2xxtW4F /tmp/tmp.xj7XJu2Fhn 00:23:20.091 00:23:20.091 real 1m27.541s 00:23:20.091 user 2m20.169s 00:23:20.091 sys 0m27.284s 00:23:20.091 ************************************ 00:23:20.091 END TEST nvmf_tls 00:23:20.091 ************************************ 00:23:20.091 12:20:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:20.091 12:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:20.091 12:20:13 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:20.091 12:20:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:20.091 12:20:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:20.091 12:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:20.091 ************************************ 00:23:20.091 START TEST nvmf_fips 00:23:20.091 ************************************ 00:23:20.091 12:20:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:20.349 * Looking for test storage... 
00:23:20.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:23:20.349 12:20:13 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:20.349 12:20:13 -- nvmf/common.sh@7 -- # uname -s 00:23:20.349 12:20:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:20.349 12:20:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:20.349 12:20:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:20.349 12:20:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:20.349 12:20:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:20.349 12:20:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:20.349 12:20:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:20.350 12:20:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:20.350 12:20:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:20.350 12:20:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:20.350 12:20:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:23:20.350 12:20:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:23:20.350 12:20:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:20.350 12:20:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:20.350 12:20:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:20.350 12:20:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:20.350 12:20:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:20.350 12:20:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:20.350 12:20:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:20.350 12:20:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:20.350 12:20:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.350 12:20:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.350 12:20:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.350 12:20:13 -- paths/export.sh@5 -- # export PATH 00:23:20.350 12:20:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.350 12:20:13 -- nvmf/common.sh@47 -- # : 0 00:23:20.350 12:20:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:20.350 12:20:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:20.350 12:20:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:20.350 12:20:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:20.350 12:20:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:20.350 12:20:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:20.350 12:20:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:20.350 12:20:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:20.350 12:20:13 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:20.350 12:20:13 -- fips/fips.sh@89 -- # check_openssl_version 00:23:20.350 12:20:13 -- fips/fips.sh@83 -- # local target=3.0.0 00:23:20.350 12:20:13 -- fips/fips.sh@85 -- # openssl version 00:23:20.350 12:20:13 -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:20.350 12:20:13 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:20.350 12:20:13 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:20.350 12:20:13 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:20.350 12:20:13 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:20.350 12:20:13 -- scripts/common.sh@333 -- # IFS=.-: 00:23:20.350 12:20:13 -- scripts/common.sh@333 -- # read -ra ver1 00:23:20.350 12:20:13 -- scripts/common.sh@334 -- # IFS=.-: 00:23:20.350 12:20:13 -- scripts/common.sh@334 -- # read -ra ver2 00:23:20.350 12:20:13 -- scripts/common.sh@335 -- # local 'op=>=' 00:23:20.350 12:20:13 -- scripts/common.sh@337 -- # ver1_l=3 00:23:20.350 12:20:13 -- scripts/common.sh@338 -- # ver2_l=3 00:23:20.350 12:20:13 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:20.350 12:20:13 -- scripts/common.sh@341 -- # case "$op" in 00:23:20.350 12:20:13 -- scripts/common.sh@345 -- # : 1 00:23:20.350 12:20:13 -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:20.350 12:20:13 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:20.350 12:20:13 -- scripts/common.sh@362 -- # decimal 3 00:23:20.350 12:20:13 -- scripts/common.sh@350 -- # local d=3 00:23:20.350 12:20:13 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:20.350 12:20:13 -- scripts/common.sh@352 -- # echo 3 00:23:20.350 12:20:13 -- scripts/common.sh@362 -- # ver1[v]=3 00:23:20.350 12:20:13 -- scripts/common.sh@363 -- # decimal 3 00:23:20.350 12:20:13 -- scripts/common.sh@350 -- # local d=3 00:23:20.350 12:20:13 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:20.350 12:20:13 -- scripts/common.sh@352 -- # echo 3 00:23:20.350 12:20:13 -- scripts/common.sh@363 -- # ver2[v]=3 00:23:20.350 12:20:13 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:20.350 12:20:13 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:20.350 12:20:13 -- scripts/common.sh@361 -- # (( v++ )) 00:23:20.350 12:20:13 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:20.350 12:20:13 -- scripts/common.sh@362 -- # decimal 0 00:23:20.350 12:20:13 -- scripts/common.sh@350 -- # local d=0 00:23:20.350 12:20:13 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:20.350 12:20:13 -- scripts/common.sh@352 -- # echo 0 00:23:20.350 12:20:13 -- scripts/common.sh@362 -- # ver1[v]=0 00:23:20.350 12:20:13 -- scripts/common.sh@363 -- # decimal 0 00:23:20.350 12:20:13 -- scripts/common.sh@350 -- # local d=0 00:23:20.350 12:20:13 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:20.350 12:20:13 -- scripts/common.sh@352 -- # echo 0 00:23:20.350 12:20:13 -- scripts/common.sh@363 -- # ver2[v]=0 00:23:20.350 12:20:13 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:20.350 12:20:13 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:20.350 12:20:13 -- scripts/common.sh@361 -- # (( v++ )) 00:23:20.350 12:20:13 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:20.350 12:20:13 -- scripts/common.sh@362 -- # decimal 9 00:23:20.350 12:20:13 -- scripts/common.sh@350 -- # local d=9 00:23:20.350 12:20:13 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:20.350 12:20:13 -- scripts/common.sh@352 -- # echo 9 00:23:20.350 12:20:13 -- scripts/common.sh@362 -- # ver1[v]=9 00:23:20.350 12:20:13 -- scripts/common.sh@363 -- # decimal 0 00:23:20.350 12:20:13 -- scripts/common.sh@350 -- # local d=0 00:23:20.350 12:20:13 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:20.350 12:20:13 -- scripts/common.sh@352 -- # echo 0 00:23:20.350 12:20:13 -- scripts/common.sh@363 -- # ver2[v]=0 00:23:20.350 12:20:13 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:20.350 12:20:13 -- scripts/common.sh@364 -- # return 0 00:23:20.350 12:20:13 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:20.350 12:20:13 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:20.350 12:20:13 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:20.350 12:20:13 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:20.350 12:20:13 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:20.350 12:20:13 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:20.350 12:20:13 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:20.350 12:20:13 -- fips/fips.sh@113 -- # build_openssl_config 00:23:20.350 12:20:13 -- fips/fips.sh@37 -- # cat 00:23:20.350 12:20:13 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:23:20.350 12:20:13 -- fips/fips.sh@58 -- # cat - 00:23:20.350 12:20:13 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:20.350 12:20:13 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:20.350 12:20:13 -- fips/fips.sh@116 -- # mapfile -t providers 00:23:20.350 12:20:13 -- fips/fips.sh@116 -- # openssl list -providers 00:23:20.350 12:20:13 -- fips/fips.sh@116 -- # grep name 00:23:20.350 12:20:13 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:20.350 12:20:13 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:20.350 12:20:13 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:20.350 12:20:13 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:20.350 12:20:13 -- fips/fips.sh@127 -- # : 00:23:20.350 12:20:13 -- common/autotest_common.sh@638 -- # local es=0 00:23:20.350 12:20:13 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:20.350 12:20:13 -- common/autotest_common.sh@626 -- # local arg=openssl 00:23:20.350 12:20:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:20.350 12:20:13 -- common/autotest_common.sh@630 -- # type -t openssl 00:23:20.350 12:20:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:20.350 12:20:13 -- common/autotest_common.sh@632 -- # type -P openssl 00:23:20.350 12:20:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:20.350 12:20:13 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:23:20.350 12:20:13 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:23:20.350 12:20:13 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:23:20.609 Error setting digest 00:23:20.609 00023C0E5D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:20.609 00023C0E5D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:20.609 12:20:13 -- common/autotest_common.sh@641 -- # es=1 00:23:20.609 12:20:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:20.609 12:20:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:20.609 12:20:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:20.609 12:20:13 -- fips/fips.sh@130 -- # nvmftestinit 00:23:20.609 12:20:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:20.609 12:20:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:20.609 12:20:13 -- nvmf/common.sh@437 -- # prepare_net_devs 
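The trace above is fips.sh deciding whether this host can run the FIPS test at all: OpenSSL must report a version of at least 3.0.0 (the cmp_versions walk over 3.0.9 vs 3.0.0), a fips provider must be listed next to the base provider, and MD5 is then expected to fail, which is exactly the "Error setting digest" output captured above. A minimal standalone sketch of that gate, using only the commands visible in the trace (the error handling and exact flow are assumptions, not the script's real structure):

    # Rough sketch of the FIPS-readiness gate traced above (structure assumed).
    ver=$(openssl version | awk '{print $2}')                  # e.g. 3.0.9
    lowest=$(printf '%s\n' "$ver" 3.0.0 | sort -V | head -n1)  # is version >= 3.0.0?
    [ "$lowest" = 3.0.0 ] || { echo "OpenSSL $ver is too old for FIPS"; exit 1; }
    openssl list -providers | grep -qi fips || { echo "no FIPS provider loaded"; exit 1; }
    # With FIPS enforced, a non-approved digest such as MD5 must fail.
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 still succeeds - FIPS mode is not enforced"; exit 1
    fi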
00:23:20.609 12:20:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:20.609 12:20:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:20.609 12:20:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.609 12:20:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:20.609 12:20:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.609 12:20:13 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:20.609 12:20:13 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:20.609 12:20:13 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:20.609 12:20:13 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:20.609 12:20:13 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:20.609 12:20:13 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:20.609 12:20:13 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.609 12:20:13 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.609 12:20:13 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:20.609 12:20:13 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:20.609 12:20:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:20.609 12:20:13 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:20.609 12:20:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:20.609 12:20:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.609 12:20:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:20.609 12:20:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:20.609 12:20:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:20.609 12:20:13 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:20.609 12:20:13 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:20.609 12:20:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:20.609 Cannot find device "nvmf_tgt_br" 00:23:20.609 12:20:13 -- nvmf/common.sh@155 -- # true 00:23:20.609 12:20:13 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:20.609 Cannot find device "nvmf_tgt_br2" 00:23:20.609 12:20:13 -- nvmf/common.sh@156 -- # true 00:23:20.609 12:20:13 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:20.609 12:20:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:20.609 Cannot find device "nvmf_tgt_br" 00:23:20.609 12:20:13 -- nvmf/common.sh@158 -- # true 00:23:20.609 12:20:13 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:20.609 Cannot find device "nvmf_tgt_br2" 00:23:20.609 12:20:13 -- nvmf/common.sh@159 -- # true 00:23:20.609 12:20:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:20.609 12:20:13 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:20.609 12:20:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:20.610 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:20.610 12:20:13 -- nvmf/common.sh@162 -- # true 00:23:20.610 12:20:13 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:20.610 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:20.610 12:20:13 -- nvmf/common.sh@163 -- # true 00:23:20.610 12:20:13 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:20.610 12:20:13 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:20.610 12:20:14 
-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:20.610 12:20:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:20.610 12:20:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:20.610 12:20:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:20.610 12:20:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:20.610 12:20:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:20.610 12:20:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:20.610 12:20:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:20.868 12:20:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:20.868 12:20:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:20.868 12:20:14 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:20.868 12:20:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:20.868 12:20:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:20.868 12:20:14 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:20.868 12:20:14 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:20.868 12:20:14 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:20.868 12:20:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:20.868 12:20:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:20.868 12:20:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:20.868 12:20:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:20.868 12:20:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:20.868 12:20:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:20.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:23:20.868 00:23:20.868 --- 10.0.0.2 ping statistics --- 00:23:20.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.868 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:23:20.868 12:20:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:20.868 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:20.868 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:23:20.868 00:23:20.868 --- 10.0.0.3 ping statistics --- 00:23:20.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.868 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:23:20.868 12:20:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:20.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:20.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:23:20.868 00:23:20.868 --- 10.0.0.1 ping statistics --- 00:23:20.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.868 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:23:20.868 12:20:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.868 12:20:14 -- nvmf/common.sh@422 -- # return 0 00:23:20.868 12:20:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:20.868 12:20:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.868 12:20:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:20.868 12:20:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:20.868 12:20:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.868 12:20:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:20.868 12:20:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:20.868 12:20:14 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:20.868 12:20:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:20.868 12:20:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:20.868 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:23:20.868 12:20:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:20.868 12:20:14 -- nvmf/common.sh@470 -- # nvmfpid=71342 00:23:20.868 12:20:14 -- nvmf/common.sh@471 -- # waitforlisten 71342 00:23:20.868 12:20:14 -- common/autotest_common.sh@817 -- # '[' -z 71342 ']' 00:23:20.868 12:20:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.868 12:20:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:20.868 12:20:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.868 12:20:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:20.868 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:23:20.868 [2024-04-26 12:20:14.306659] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:23:20.869 [2024-04-26 12:20:14.306774] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.127 [2024-04-26 12:20:14.454134] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.127 [2024-04-26 12:20:14.581880] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.127 [2024-04-26 12:20:14.581926] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.127 [2024-04-26 12:20:14.581946] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.127 [2024-04-26 12:20:14.581957] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.127 [2024-04-26 12:20:14.581967] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
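Before starting the target, nvmf_veth_init (traced above) assembles the virtual test network: a dedicated network namespace for the target, veth pairs for the initiator and target sides, a bridge tying them together, 10.0.0.x/24 addressing, an iptables rule opening TCP port 4420, and ping checks across the link. Condensed from the ip/iptables calls in the trace (interface names and addresses as logged; the second target interface nvmf_tgt_if2/10.0.0.3 is set up the same way and omitted here for brevity):

    # Condensed from the nvmf_veth_init commands traced above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # host-side initiator can reach the namespaced target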
00:23:21.127 [2024-04-26 12:20:14.582006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.062 12:20:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:22.062 12:20:15 -- common/autotest_common.sh@850 -- # return 0 00:23:22.062 12:20:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:22.062 12:20:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:22.062 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:23:22.062 12:20:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.062 12:20:15 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:22.062 12:20:15 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:22.062 12:20:15 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:22.062 12:20:15 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:22.062 12:20:15 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:22.062 12:20:15 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:22.062 12:20:15 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:22.062 12:20:15 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:22.321 [2024-04-26 12:20:15.663353] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.321 [2024-04-26 12:20:15.679308] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:22.321 [2024-04-26 12:20:15.679540] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.321 [2024-04-26 12:20:15.710598] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:22.321 malloc0 00:23:22.321 12:20:15 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:22.321 12:20:15 -- fips/fips.sh@147 -- # bdevperf_pid=71380 00:23:22.321 12:20:15 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:22.321 12:20:15 -- fips/fips.sh@148 -- # waitforlisten 71380 /var/tmp/bdevperf.sock 00:23:22.321 12:20:15 -- common/autotest_common.sh@817 -- # '[' -z 71380 ']' 00:23:22.321 12:20:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.321 12:20:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:22.321 12:20:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.321 12:20:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:22.321 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:23:22.580 [2024-04-26 12:20:15.814303] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:23:22.580 [2024-04-26 12:20:15.814401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71380 ] 00:23:22.580 [2024-04-26 12:20:15.955674] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.839 [2024-04-26 12:20:16.084948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.404 12:20:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:23.404 12:20:16 -- common/autotest_common.sh@850 -- # return 0 00:23:23.404 12:20:16 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:23.662 [2024-04-26 12:20:17.021212] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:23.662 [2024-04-26 12:20:17.021341] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:23.662 TLSTESTn1 00:23:23.662 12:20:17 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:23.920 Running I/O for 10 seconds... 00:23:33.898 00:23:33.898 Latency(us) 00:23:33.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.898 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:33.898 Verification LBA range: start 0x0 length 0x2000 00:23:33.898 TLSTESTn1 : 10.02 3643.70 14.23 0.00 0.00 35047.72 5183.30 33602.09 00:23:33.898 =================================================================================================================== 00:23:33.898 Total : 3643.70 14.23 0.00 0.00 35047.72 5183.30 33602.09 00:23:33.898 0 00:23:33.898 12:20:27 -- fips/fips.sh@1 -- # cleanup 00:23:33.898 12:20:27 -- fips/fips.sh@15 -- # process_shm --id 0 00:23:33.898 12:20:27 -- common/autotest_common.sh@794 -- # type=--id 00:23:33.898 12:20:27 -- common/autotest_common.sh@795 -- # id=0 00:23:33.898 12:20:27 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:23:33.898 12:20:27 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:33.898 12:20:27 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:23:33.898 12:20:27 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:23:33.898 12:20:27 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:23:33.898 12:20:27 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:33.898 nvmf_trace.0 00:23:33.898 12:20:27 -- common/autotest_common.sh@809 -- # return 0 00:23:33.898 12:20:27 -- fips/fips.sh@16 -- # killprocess 71380 00:23:33.898 12:20:27 -- common/autotest_common.sh@936 -- # '[' -z 71380 ']' 00:23:33.898 12:20:27 -- common/autotest_common.sh@940 -- # kill -0 71380 00:23:33.898 12:20:27 -- common/autotest_common.sh@941 -- # uname 00:23:33.898 12:20:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:33.898 12:20:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71380 00:23:33.898 killing process with pid 71380 00:23:33.898 Received shutdown signal, test time was 
about 10.000000 seconds 00:23:33.898 00:23:33.898 Latency(us) 00:23:33.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.898 =================================================================================================================== 00:23:33.898 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.898 12:20:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:33.898 12:20:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:33.898 12:20:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71380' 00:23:33.898 12:20:27 -- common/autotest_common.sh@955 -- # kill 71380 00:23:33.898 [2024-04-26 12:20:27.357834] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:33.898 12:20:27 -- common/autotest_common.sh@960 -- # wait 71380 00:23:34.155 12:20:27 -- fips/fips.sh@17 -- # nvmftestfini 00:23:34.155 12:20:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:34.155 12:20:27 -- nvmf/common.sh@117 -- # sync 00:23:34.413 12:20:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:34.413 12:20:27 -- nvmf/common.sh@120 -- # set +e 00:23:34.413 12:20:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:34.413 12:20:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:34.413 rmmod nvme_tcp 00:23:34.413 rmmod nvme_fabrics 00:23:34.413 rmmod nvme_keyring 00:23:34.413 12:20:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:34.413 12:20:27 -- nvmf/common.sh@124 -- # set -e 00:23:34.413 12:20:27 -- nvmf/common.sh@125 -- # return 0 00:23:34.413 12:20:27 -- nvmf/common.sh@478 -- # '[' -n 71342 ']' 00:23:34.413 12:20:27 -- nvmf/common.sh@479 -- # killprocess 71342 00:23:34.413 12:20:27 -- common/autotest_common.sh@936 -- # '[' -z 71342 ']' 00:23:34.413 12:20:27 -- common/autotest_common.sh@940 -- # kill -0 71342 00:23:34.413 12:20:27 -- common/autotest_common.sh@941 -- # uname 00:23:34.413 12:20:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:34.413 12:20:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71342 00:23:34.413 killing process with pid 71342 00:23:34.413 12:20:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:34.413 12:20:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:34.413 12:20:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71342' 00:23:34.413 12:20:27 -- common/autotest_common.sh@955 -- # kill 71342 00:23:34.413 [2024-04-26 12:20:27.730423] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:34.413 12:20:27 -- common/autotest_common.sh@960 -- # wait 71342 00:23:34.671 12:20:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:34.671 12:20:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:34.671 12:20:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:34.671 12:20:27 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:34.671 12:20:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:34.671 12:20:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.671 12:20:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:34.671 12:20:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.671 12:20:28 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:34.671 12:20:28 -- fips/fips.sh@18 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:34.671 00:23:34.671 real 0m14.501s 00:23:34.671 user 0m19.796s 00:23:34.671 sys 0m5.736s 00:23:34.671 12:20:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:34.671 12:20:28 -- common/autotest_common.sh@10 -- # set +x 00:23:34.671 ************************************ 00:23:34.671 END TEST nvmf_fips 00:23:34.671 ************************************ 00:23:34.671 12:20:28 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:23:34.671 12:20:28 -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:23:34.671 12:20:28 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:23:34.671 12:20:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:34.671 12:20:28 -- common/autotest_common.sh@10 -- # set +x 00:23:34.671 12:20:28 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:23:34.671 12:20:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:34.671 12:20:28 -- common/autotest_common.sh@10 -- # set +x 00:23:34.671 12:20:28 -- nvmf/nvmf.sh@88 -- # [[ 1 -eq 0 ]] 00:23:34.671 12:20:28 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:34.671 12:20:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:34.671 12:20:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:34.671 12:20:28 -- common/autotest_common.sh@10 -- # set +x 00:23:34.929 ************************************ 00:23:34.929 START TEST nvmf_identify 00:23:34.929 ************************************ 00:23:34.929 12:20:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:34.929 * Looking for test storage... 00:23:34.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:34.929 12:20:28 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:34.929 12:20:28 -- nvmf/common.sh@7 -- # uname -s 00:23:34.929 12:20:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.929 12:20:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.929 12:20:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.929 12:20:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.929 12:20:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.929 12:20:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.929 12:20:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.929 12:20:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.929 12:20:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.929 12:20:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.929 12:20:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:23:34.929 12:20:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:23:34.929 12:20:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.929 12:20:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.929 12:20:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:34.929 12:20:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.929 12:20:28 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:34.929 12:20:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.929 12:20:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.929 12:20:28 -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.929 12:20:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.929 12:20:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.929 12:20:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.929 12:20:28 -- paths/export.sh@5 -- # export PATH 00:23:34.929 12:20:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.929 12:20:28 -- nvmf/common.sh@47 -- # : 0 00:23:34.929 12:20:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:34.929 12:20:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:34.929 12:20:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.929 12:20:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.929 12:20:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.929 12:20:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:34.929 12:20:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:34.929 12:20:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:34.929 12:20:28 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:34.929 12:20:28 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:34.929 12:20:28 -- host/identify.sh@14 -- # nvmftestinit 00:23:34.929 12:20:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:34.929 12:20:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.929 12:20:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:34.929 12:20:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:34.929 12:20:28 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:23:34.929 12:20:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.929 12:20:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:34.929 12:20:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.929 12:20:28 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:34.929 12:20:28 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:34.929 12:20:28 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:34.929 12:20:28 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:34.929 12:20:28 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:34.929 12:20:28 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:34.929 12:20:28 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.929 12:20:28 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.929 12:20:28 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:34.929 12:20:28 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:34.929 12:20:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:34.929 12:20:28 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:34.929 12:20:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:34.929 12:20:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.929 12:20:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:34.929 12:20:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:34.929 12:20:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:34.929 12:20:28 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:34.929 12:20:28 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:34.929 12:20:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:34.929 Cannot find device "nvmf_tgt_br" 00:23:34.929 12:20:28 -- nvmf/common.sh@155 -- # true 00:23:34.930 12:20:28 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:34.930 Cannot find device "nvmf_tgt_br2" 00:23:34.930 12:20:28 -- nvmf/common.sh@156 -- # true 00:23:34.930 12:20:28 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:34.930 12:20:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:34.930 Cannot find device "nvmf_tgt_br" 00:23:34.930 12:20:28 -- nvmf/common.sh@158 -- # true 00:23:34.930 12:20:28 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:34.930 Cannot find device "nvmf_tgt_br2" 00:23:34.930 12:20:28 -- nvmf/common.sh@159 -- # true 00:23:34.930 12:20:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:35.188 12:20:28 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:35.188 12:20:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:35.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:35.188 12:20:28 -- nvmf/common.sh@162 -- # true 00:23:35.188 12:20:28 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:35.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:35.188 12:20:28 -- nvmf/common.sh@163 -- # true 00:23:35.188 12:20:28 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:35.188 12:20:28 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:35.188 12:20:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:35.188 12:20:28 -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:35.188 12:20:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:35.188 12:20:28 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:35.188 12:20:28 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:35.188 12:20:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:35.188 12:20:28 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:35.188 12:20:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:35.188 12:20:28 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:35.188 12:20:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:35.188 12:20:28 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:35.188 12:20:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:35.188 12:20:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:35.188 12:20:28 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:35.188 12:20:28 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:35.188 12:20:28 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:35.188 12:20:28 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:35.188 12:20:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:35.188 12:20:28 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:35.188 12:20:28 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:35.188 12:20:28 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:35.188 12:20:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:35.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:23:35.188 00:23:35.188 --- 10.0.0.2 ping statistics --- 00:23:35.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.188 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:35.188 12:20:28 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:35.188 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:35.188 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:23:35.188 00:23:35.188 --- 10.0.0.3 ping statistics --- 00:23:35.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.188 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:23:35.188 12:20:28 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:35.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:35.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:23:35.188 00:23:35.188 --- 10.0.0.1 ping statistics --- 00:23:35.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.188 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:23:35.188 12:20:28 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.188 12:20:28 -- nvmf/common.sh@422 -- # return 0 00:23:35.188 12:20:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:35.188 12:20:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.188 12:20:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:35.188 12:20:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:35.188 12:20:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.188 12:20:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:35.188 12:20:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:35.188 12:20:28 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:35.188 12:20:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:35.188 12:20:28 -- common/autotest_common.sh@10 -- # set +x 00:23:35.188 12:20:28 -- host/identify.sh@19 -- # nvmfpid=71734 00:23:35.188 12:20:28 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:35.188 12:20:28 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:35.188 12:20:28 -- host/identify.sh@23 -- # waitforlisten 71734 00:23:35.188 12:20:28 -- common/autotest_common.sh@817 -- # '[' -z 71734 ']' 00:23:35.188 12:20:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.188 12:20:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:35.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.188 12:20:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.188 12:20:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:35.188 12:20:28 -- common/autotest_common.sh@10 -- # set +x 00:23:35.446 [2024-04-26 12:20:28.682333] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:23:35.446 [2024-04-26 12:20:28.682447] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.446 [2024-04-26 12:20:28.824489] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:35.704 [2024-04-26 12:20:28.944166] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.704 [2024-04-26 12:20:28.944246] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.704 [2024-04-26 12:20:28.944258] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.704 [2024-04-26 12:20:28.944266] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.704 [2024-04-26 12:20:28.944274] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:35.704 [2024-04-26 12:20:28.944418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.704 [2024-04-26 12:20:28.944527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.704 [2024-04-26 12:20:28.944636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.704 [2024-04-26 12:20:28.944641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.270 12:20:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:36.270 12:20:29 -- common/autotest_common.sh@850 -- # return 0 00:23:36.270 12:20:29 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:36.270 12:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.270 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:23:36.270 [2024-04-26 12:20:29.706417] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.270 12:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.270 12:20:29 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:36.270 12:20:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:36.270 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:23:36.529 12:20:29 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:36.529 12:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.529 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:23:36.529 Malloc0 00:23:36.529 12:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.529 12:20:29 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:36.529 12:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.529 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:23:36.529 12:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.529 12:20:29 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:36.529 12:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.529 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:23:36.529 12:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.529 12:20:29 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:36.529 12:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.529 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:23:36.529 [2024-04-26 12:20:29.806358] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.529 12:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.529 12:20:29 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:36.529 12:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.529 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:23:36.529 12:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.529 12:20:29 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:36.529 12:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.529 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:23:36.529 [2024-04-26 12:20:29.822124] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:23:36.529 [ 
00:23:36.529 { 00:23:36.529 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:36.529 "subtype": "Discovery", 00:23:36.529 "listen_addresses": [ 00:23:36.529 { 00:23:36.529 "transport": "TCP", 00:23:36.529 "trtype": "TCP", 00:23:36.529 "adrfam": "IPv4", 00:23:36.529 "traddr": "10.0.0.2", 00:23:36.529 "trsvcid": "4420" 00:23:36.529 } 00:23:36.529 ], 00:23:36.529 "allow_any_host": true, 00:23:36.529 "hosts": [] 00:23:36.529 }, 00:23:36.529 { 00:23:36.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.529 "subtype": "NVMe", 00:23:36.529 "listen_addresses": [ 00:23:36.529 { 00:23:36.529 "transport": "TCP", 00:23:36.529 "trtype": "TCP", 00:23:36.529 "adrfam": "IPv4", 00:23:36.529 "traddr": "10.0.0.2", 00:23:36.529 "trsvcid": "4420" 00:23:36.529 } 00:23:36.529 ], 00:23:36.529 "allow_any_host": true, 00:23:36.529 "hosts": [], 00:23:36.529 "serial_number": "SPDK00000000000001", 00:23:36.529 "model_number": "SPDK bdev Controller", 00:23:36.529 "max_namespaces": 32, 00:23:36.529 "min_cntlid": 1, 00:23:36.529 "max_cntlid": 65519, 00:23:36.529 "namespaces": [ 00:23:36.529 { 00:23:36.529 "nsid": 1, 00:23:36.529 "bdev_name": "Malloc0", 00:23:36.529 "name": "Malloc0", 00:23:36.529 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:36.529 "eui64": "ABCDEF0123456789", 00:23:36.529 "uuid": "ee009624-b220-442a-b2f0-3368ba9626b4" 00:23:36.529 } 00:23:36.529 ] 00:23:36.529 } 00:23:36.529 ] 00:23:36.529 12:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.529 12:20:29 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:36.529 [2024-04-26 12:20:29.857995] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
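identify.sh builds the target it is about to query with a short RPC sequence (traced above via the rpc_cmd wrapper), then points spdk_nvme_identify at the discovery subsystem; the nvmf_get_subsystems JSON above is the resulting configuration. Written as direct rpc.py calls, with the wrapper and socket handling omitted, the sequence is:

    # The rpc_cmd calls from the trace above, shown as plain rpc.py invocations.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Query the discovery service over the test network:
    build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all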
00:23:36.529 [2024-04-26 12:20:29.858050] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71769 ] 00:23:36.792 [2024-04-26 12:20:29.996693] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:36.792 [2024-04-26 12:20:29.996783] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:36.792 [2024-04-26 12:20:29.996791] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:36.792 [2024-04-26 12:20:29.996806] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:36.792 [2024-04-26 12:20:29.996823] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:23:36.792 [2024-04-26 12:20:29.997003] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:36.792 [2024-04-26 12:20:29.997060] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1df0360 0 00:23:36.792 [2024-04-26 12:20:30.009201] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:36.792 [2024-04-26 12:20:30.009238] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:36.792 [2024-04-26 12:20:30.009245] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:36.792 [2024-04-26 12:20:30.009249] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:36.792 [2024-04-26 12:20:30.009303] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.792 [2024-04-26 12:20:30.009312] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.792 [2024-04-26 12:20:30.009317] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df0360) 00:23:36.792 [2024-04-26 12:20:30.009333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:36.792 [2024-04-26 12:20:30.009367] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38a20, cid 0, qid 0 00:23:36.792 [2024-04-26 12:20:30.017209] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.792 [2024-04-26 12:20:30.017247] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.792 [2024-04-26 12:20:30.017254] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.792 [2024-04-26 12:20:30.017260] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38a20) on tqpair=0x1df0360 00:23:36.792 [2024-04-26 12:20:30.017279] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:36.792 [2024-04-26 12:20:30.017292] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:36.792 [2024-04-26 12:20:30.017299] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:36.792 [2024-04-26 12:20:30.017320] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.792 [2024-04-26 12:20:30.017327] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.792 [2024-04-26 
12:20:30.017331] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df0360) 00:23:36.792 [2024-04-26 12:20:30.017346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.792 [2024-04-26 12:20:30.017390] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38a20, cid 0, qid 0 00:23:36.792 [2024-04-26 12:20:30.017459] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.792 [2024-04-26 12:20:30.017467] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.792 [2024-04-26 12:20:30.017471] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.792 [2024-04-26 12:20:30.017475] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38a20) on tqpair=0x1df0360 00:23:36.792 [2024-04-26 12:20:30.017487] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:36.792 [2024-04-26 12:20:30.017496] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:36.792 [2024-04-26 12:20:30.017505] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.792 [2024-04-26 12:20:30.017509] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.792 [2024-04-26 12:20:30.017513] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df0360) 00:23:36.792 [2024-04-26 12:20:30.017522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.792 [2024-04-26 12:20:30.017542] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38a20, cid 0, qid 0 00:23:36.792 [2024-04-26 12:20:30.017593] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.792 [2024-04-26 12:20:30.017600] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.792 [2024-04-26 12:20:30.017604] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.792 [2024-04-26 12:20:30.017609] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38a20) on tqpair=0x1df0360 00:23:36.792 [2024-04-26 12:20:30.017616] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:36.792 [2024-04-26 12:20:30.017626] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:36.792 [2024-04-26 12:20:30.017634] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.792 [2024-04-26 12:20:30.017638] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.792 [2024-04-26 12:20:30.017643] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df0360) 00:23:36.792 [2024-04-26 12:20:30.017651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.792 [2024-04-26 12:20:30.017669] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38a20, cid 0, qid 0 00:23:36.792 [2024-04-26 12:20:30.017715] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.792 [2024-04-26 12:20:30.017722] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.792 [2024-04-26 12:20:30.017726] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.792 [2024-04-26 12:20:30.017730] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38a20) on tqpair=0x1df0360 00:23:36.792 [2024-04-26 12:20:30.017737] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:36.792 [2024-04-26 12:20:30.017748] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.792 [2024-04-26 12:20:30.017753] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.792 [2024-04-26 12:20:30.017757] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df0360) 00:23:36.792 [2024-04-26 12:20:30.017765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.792 [2024-04-26 12:20:30.017782] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38a20, cid 0, qid 0 00:23:36.792 [2024-04-26 12:20:30.017829] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.792 [2024-04-26 12:20:30.017836] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.792 [2024-04-26 12:20:30.017840] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.792 [2024-04-26 12:20:30.017844] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38a20) on tqpair=0x1df0360 00:23:36.792 [2024-04-26 12:20:30.017850] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:36.792 [2024-04-26 12:20:30.017856] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:36.792 [2024-04-26 12:20:30.017864] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:36.792 [2024-04-26 12:20:30.017970] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:36.792 [2024-04-26 12:20:30.017976] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:36.792 [2024-04-26 12:20:30.017986] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.792 [2024-04-26 12:20:30.017991] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.792 [2024-04-26 12:20:30.017995] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df0360) 00:23:36.792 [2024-04-26 12:20:30.018002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.792 [2024-04-26 12:20:30.018021] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38a20, cid 0, qid 0 00:23:36.792 [2024-04-26 12:20:30.018081] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.792 [2024-04-26 12:20:30.018088] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.792 [2024-04-26 12:20:30.018092] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:23:36.792 [2024-04-26 12:20:30.018097] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38a20) on tqpair=0x1df0360 00:23:36.792 [2024-04-26 12:20:30.018103] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:36.792 [2024-04-26 12:20:30.018114] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.792 [2024-04-26 12:20:30.018118] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.792 [2024-04-26 12:20:30.018123] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df0360) 00:23:36.792 [2024-04-26 12:20:30.018130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.793 [2024-04-26 12:20:30.018148] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38a20, cid 0, qid 0 00:23:36.793 [2024-04-26 12:20:30.018210] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.793 [2024-04-26 12:20:30.018219] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.793 [2024-04-26 12:20:30.018223] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018227] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38a20) on tqpair=0x1df0360 00:23:36.793 [2024-04-26 12:20:30.018234] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:36.793 [2024-04-26 12:20:30.018240] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:36.793 [2024-04-26 12:20:30.018248] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:36.793 [2024-04-26 12:20:30.018259] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:36.793 [2024-04-26 12:20:30.018274] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018278] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df0360) 00:23:36.793 [2024-04-26 12:20:30.018287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.793 [2024-04-26 12:20:30.018309] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38a20, cid 0, qid 0 00:23:36.793 [2024-04-26 12:20:30.018409] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.793 [2024-04-26 12:20:30.018417] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.793 [2024-04-26 12:20:30.018421] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018426] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df0360): datao=0, datal=4096, cccid=0 00:23:36.793 [2024-04-26 12:20:30.018431] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e38a20) on tqpair(0x1df0360): expected_datao=0, payload_size=4096 00:23:36.793 [2024-04-26 12:20:30.018437] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:23:36.793 [2024-04-26 12:20:30.018446] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018452] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018461] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.793 [2024-04-26 12:20:30.018467] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.793 [2024-04-26 12:20:30.018471] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018475] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38a20) on tqpair=0x1df0360 00:23:36.793 [2024-04-26 12:20:30.018487] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:36.793 [2024-04-26 12:20:30.018493] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:36.793 [2024-04-26 12:20:30.018499] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:36.793 [2024-04-26 12:20:30.018510] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:36.793 [2024-04-26 12:20:30.018515] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:36.793 [2024-04-26 12:20:30.018521] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:36.793 [2024-04-26 12:20:30.018531] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:36.793 [2024-04-26 12:20:30.018540] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018545] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018549] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df0360) 00:23:36.793 [2024-04-26 12:20:30.018558] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:36.793 [2024-04-26 12:20:30.018578] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38a20, cid 0, qid 0 00:23:36.793 [2024-04-26 12:20:30.018640] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.793 [2024-04-26 12:20:30.018647] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.793 [2024-04-26 12:20:30.018652] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018656] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38a20) on tqpair=0x1df0360 00:23:36.793 [2024-04-26 12:20:30.018666] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018670] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018675] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df0360) 00:23:36.793 [2024-04-26 12:20:30.018682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.793 [2024-04-26 12:20:30.018689] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018694] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018698] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1df0360) 00:23:36.793 [2024-04-26 12:20:30.018704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.793 [2024-04-26 12:20:30.018712] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018716] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018720] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1df0360) 00:23:36.793 [2024-04-26 12:20:30.018727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.793 [2024-04-26 12:20:30.018734] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018738] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018742] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df0360) 00:23:36.793 [2024-04-26 12:20:30.018749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.793 [2024-04-26 12:20:30.018754] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:36.793 [2024-04-26 12:20:30.018768] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:36.793 [2024-04-26 12:20:30.018777] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018782] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df0360) 00:23:36.793 [2024-04-26 12:20:30.018789] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.793 [2024-04-26 12:20:30.018810] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38a20, cid 0, qid 0 00:23:36.793 [2024-04-26 12:20:30.018817] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38b80, cid 1, qid 0 00:23:36.793 [2024-04-26 12:20:30.018822] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38ce0, cid 2, qid 0 00:23:36.793 [2024-04-26 12:20:30.018827] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38e40, cid 3, qid 0 00:23:36.793 [2024-04-26 12:20:30.018832] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38fa0, cid 4, qid 0 00:23:36.793 [2024-04-26 12:20:30.018923] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.793 [2024-04-26 12:20:30.018930] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.793 [2024-04-26 12:20:30.018934] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018939] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38fa0) on tqpair=0x1df0360 00:23:36.793 [2024-04-26 12:20:30.018946] 
nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:36.793 [2024-04-26 12:20:30.018952] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:36.793 [2024-04-26 12:20:30.018965] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.018970] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df0360) 00:23:36.793 [2024-04-26 12:20:30.018978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.793 [2024-04-26 12:20:30.018997] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38fa0, cid 4, qid 0 00:23:36.793 [2024-04-26 12:20:30.019058] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.793 [2024-04-26 12:20:30.019065] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.793 [2024-04-26 12:20:30.019069] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.019073] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df0360): datao=0, datal=4096, cccid=4 00:23:36.793 [2024-04-26 12:20:30.019078] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e38fa0) on tqpair(0x1df0360): expected_datao=0, payload_size=4096 00:23:36.793 [2024-04-26 12:20:30.019083] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.019091] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.019095] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.019104] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.793 [2024-04-26 12:20:30.019110] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.793 [2024-04-26 12:20:30.019114] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.019118] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38fa0) on tqpair=0x1df0360 00:23:36.793 [2024-04-26 12:20:30.019135] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:36.793 [2024-04-26 12:20:30.019160] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.019166] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df0360) 00:23:36.793 [2024-04-26 12:20:30.019186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.793 [2024-04-26 12:20:30.019197] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.019201] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.793 [2024-04-26 12:20:30.019205] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df0360) 00:23:36.793 [2024-04-26 12:20:30.019212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.793 [2024-04-26 12:20:30.019243] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x1e38fa0, cid 4, qid 0 00:23:36.794 [2024-04-26 12:20:30.019251] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e39100, cid 5, qid 0 00:23:36.794 [2024-04-26 12:20:30.019384] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.794 [2024-04-26 12:20:30.019401] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.794 [2024-04-26 12:20:30.019406] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.794 [2024-04-26 12:20:30.019410] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df0360): datao=0, datal=1024, cccid=4 00:23:36.794 [2024-04-26 12:20:30.019416] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e38fa0) on tqpair(0x1df0360): expected_datao=0, payload_size=1024 00:23:36.794 [2024-04-26 12:20:30.019421] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.794 [2024-04-26 12:20:30.019428] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.794 [2024-04-26 12:20:30.019433] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.794 [2024-04-26 12:20:30.019439] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.794 [2024-04-26 12:20:30.019445] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.794 [2024-04-26 12:20:30.019449] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.794 [2024-04-26 12:20:30.019454] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e39100) on tqpair=0x1df0360 00:23:36.794 [2024-04-26 12:20:30.019476] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.794 [2024-04-26 12:20:30.019484] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.794 [2024-04-26 12:20:30.019488] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.794 [2024-04-26 12:20:30.019493] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38fa0) on tqpair=0x1df0360 00:23:36.794 [2024-04-26 12:20:30.019514] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.794 [2024-04-26 12:20:30.019520] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df0360) 00:23:36.794 [2024-04-26 12:20:30.019528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.794 [2024-04-26 12:20:30.019555] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38fa0, cid 4, qid 0 00:23:36.794 [2024-04-26 12:20:30.019625] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.794 [2024-04-26 12:20:30.019633] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.794 [2024-04-26 12:20:30.019637] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.794 [2024-04-26 12:20:30.019642] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df0360): datao=0, datal=3072, cccid=4 00:23:36.794 [2024-04-26 12:20:30.019647] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e38fa0) on tqpair(0x1df0360): expected_datao=0, payload_size=3072 00:23:36.794 [2024-04-26 12:20:30.019652] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.794 [2024-04-26 12:20:30.019659] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.794 [2024-04-26 12:20:30.019664] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.794 [2024-04-26 12:20:30.019672] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.794 [2024-04-26 12:20:30.019679] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.794 [2024-04-26 12:20:30.019683] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.794 [2024-04-26 12:20:30.019687] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38fa0) on tqpair=0x1df0360 00:23:36.794 [2024-04-26 12:20:30.019699] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.794 [2024-04-26 12:20:30.019704] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df0360) 00:23:36.794 [2024-04-26 12:20:30.019712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.794 [2024-04-26 12:20:30.019736] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38fa0, cid 4, qid 0 00:23:36.794 [2024-04-26 12:20:30.019810] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.794 [2024-04-26 12:20:30.019828] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.794 [2024-04-26 12:20:30.019833] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.794 [2024-04-26 12:20:30.019837] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df0360): datao=0, datal=8, cccid=4 00:23:36.794 [2024-04-26 12:20:30.019842] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e38fa0) on tqpair(0x1df0360): expected_datao=0, payload_size=8 00:23:36.794 [2024-04-26 12:20:30.019847] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.794 [2024-04-26 12:20:30.019855] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.794 [2024-04-26 12:20:30.019859] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.794 [2024-04-26 12:20:30.019876] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.794 [2024-04-26 12:20:30.019884] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.794 [2024-04-26 12:20:30.019888] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.794 [2024-04-26 12:20:30.019893] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38fa0) on tqpair=0x1df0360 00:23:36.794 ===================================================== 00:23:36.794 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:36.794 ===================================================== 00:23:36.794 Controller Capabilities/Features 00:23:36.794 ================================ 00:23:36.794 Vendor ID: 0000 00:23:36.794 Subsystem Vendor ID: 0000 00:23:36.794 Serial Number: .................... 00:23:36.794 Model Number: ........................................ 
00:23:36.794 Firmware Version: 24.05
00:23:36.794 Recommended Arb Burst: 0
00:23:36.794 IEEE OUI Identifier: 00 00 00
00:23:36.794 Multi-path I/O
00:23:36.794 May have multiple subsystem ports: No
00:23:36.794 May have multiple controllers: No
00:23:36.794 Associated with SR-IOV VF: No
00:23:36.794 Max Data Transfer Size: 131072
00:23:36.794 Max Number of Namespaces: 0
00:23:36.794 Max Number of I/O Queues: 1024
00:23:36.794 NVMe Specification Version (VS): 1.3
00:23:36.794 NVMe Specification Version (Identify): 1.3
00:23:36.794 Maximum Queue Entries: 128
00:23:36.794 Contiguous Queues Required: Yes
00:23:36.794 Arbitration Mechanisms Supported
00:23:36.794 Weighted Round Robin: Not Supported
00:23:36.794 Vendor Specific: Not Supported
00:23:36.794 Reset Timeout: 15000 ms
00:23:36.794 Doorbell Stride: 4 bytes
00:23:36.794 NVM Subsystem Reset: Not Supported
00:23:36.794 Command Sets Supported
00:23:36.794 NVM Command Set: Supported
00:23:36.794 Boot Partition: Not Supported
00:23:36.794 Memory Page Size Minimum: 4096 bytes
00:23:36.794 Memory Page Size Maximum: 4096 bytes
00:23:36.794 Persistent Memory Region: Not Supported
00:23:36.794 Optional Asynchronous Events Supported
00:23:36.794 Namespace Attribute Notices: Not Supported
00:23:36.794 Firmware Activation Notices: Not Supported
00:23:36.794 ANA Change Notices: Not Supported
00:23:36.794 PLE Aggregate Log Change Notices: Not Supported
00:23:36.794 LBA Status Info Alert Notices: Not Supported
00:23:36.794 EGE Aggregate Log Change Notices: Not Supported
00:23:36.794 Normal NVM Subsystem Shutdown event: Not Supported
00:23:36.794 Zone Descriptor Change Notices: Not Supported
00:23:36.794 Discovery Log Change Notices: Supported
00:23:36.794 Controller Attributes
00:23:36.794 128-bit Host Identifier: Not Supported
00:23:36.794 Non-Operational Permissive Mode: Not Supported
00:23:36.794 NVM Sets: Not Supported
00:23:36.794 Read Recovery Levels: Not Supported
00:23:36.794 Endurance Groups: Not Supported
00:23:36.794 Predictable Latency Mode: Not Supported
00:23:36.794 Traffic Based Keep ALive: Not Supported
00:23:36.794 Namespace Granularity: Not Supported
00:23:36.794 SQ Associations: Not Supported
00:23:36.794 UUID List: Not Supported
00:23:36.794 Multi-Domain Subsystem: Not Supported
00:23:36.794 Fixed Capacity Management: Not Supported
00:23:36.794 Variable Capacity Management: Not Supported
00:23:36.794 Delete Endurance Group: Not Supported
00:23:36.794 Delete NVM Set: Not Supported
00:23:36.794 Extended LBA Formats Supported: Not Supported
00:23:36.794 Flexible Data Placement Supported: Not Supported
00:23:36.794
00:23:36.794 Controller Memory Buffer Support
00:23:36.794 ================================
00:23:36.794 Supported: No
00:23:36.794
00:23:36.794 Persistent Memory Region Support
00:23:36.794 ================================
00:23:36.794 Supported: No
00:23:36.794
00:23:36.794 Admin Command Set Attributes
00:23:36.794 ============================
00:23:36.794 Security Send/Receive: Not Supported
00:23:36.794 Format NVM: Not Supported
00:23:36.794 Firmware Activate/Download: Not Supported
00:23:36.794 Namespace Management: Not Supported
00:23:36.794 Device Self-Test: Not Supported
00:23:36.794 Directives: Not Supported
00:23:36.794 NVMe-MI: Not Supported
00:23:36.794 Virtualization Management: Not Supported
00:23:36.794 Doorbell Buffer Config: Not Supported
00:23:36.794 Get LBA Status Capability: Not Supported
00:23:36.794 Command & Feature Lockdown Capability: Not Supported
00:23:36.794 Abort Command Limit: 1
00:23:36.794 Async Event Request Limit: 4
00:23:36.794 Number of Firmware Slots: N/A
00:23:36.794 Firmware Slot 1 Read-Only: N/A
00:23:36.794 Firmware Activation Without Reset: N/A
00:23:36.794 Multiple Update Detection Support: N/A
00:23:36.794 Firmware Update Granularity: No Information Provided
00:23:36.794 Per-Namespace SMART Log: No
00:23:36.794 Asymmetric Namespace Access Log Page: Not Supported
00:23:36.794 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:36.794 Command Effects Log Page: Not Supported
00:23:36.794 Get Log Page Extended Data: Supported
00:23:36.794 Telemetry Log Pages: Not Supported
00:23:36.794 Persistent Event Log Pages: Not Supported
00:23:36.794 Supported Log Pages Log Page: May Support
00:23:36.794 Commands Supported & Effects Log Page: Not Supported
00:23:36.794 Feature Identifiers & Effects Log Page:May Support
00:23:36.794 NVMe-MI Commands & Effects Log Page: May Support
00:23:36.795 Data Area 4 for Telemetry Log: Not Supported
00:23:36.795 Error Log Page Entries Supported: 128
00:23:36.795 Keep Alive: Not Supported
00:23:36.795
00:23:36.795 NVM Command Set Attributes
00:23:36.795 ==========================
00:23:36.795 Submission Queue Entry Size
00:23:36.795 Max: 1
00:23:36.795 Min: 1
00:23:36.795 Completion Queue Entry Size
00:23:36.795 Max: 1
00:23:36.795 Min: 1
00:23:36.795 Number of Namespaces: 0
00:23:36.795 Compare Command: Not Supported
00:23:36.795 Write Uncorrectable Command: Not Supported
00:23:36.795 Dataset Management Command: Not Supported
00:23:36.795 Write Zeroes Command: Not Supported
00:23:36.795 Set Features Save Field: Not Supported
00:23:36.795 Reservations: Not Supported
00:23:36.795 Timestamp: Not Supported
00:23:36.795 Copy: Not Supported
00:23:36.795 Volatile Write Cache: Not Present
00:23:36.795 Atomic Write Unit (Normal): 1
00:23:36.795 Atomic Write Unit (PFail): 1
00:23:36.795 Atomic Compare & Write Unit: 1
00:23:36.795 Fused Compare & Write: Supported
00:23:36.795 Scatter-Gather List
00:23:36.795 SGL Command Set: Supported
00:23:36.795 SGL Keyed: Supported
00:23:36.795 SGL Bit Bucket Descriptor: Not Supported
00:23:36.795 SGL Metadata Pointer: Not Supported
00:23:36.795 Oversized SGL: Not Supported
00:23:36.795 SGL Metadata Address: Not Supported
00:23:36.795 SGL Offset: Supported
00:23:36.795 Transport SGL Data Block: Not Supported
00:23:36.795 Replay Protected Memory Block: Not Supported
00:23:36.795
00:23:36.795 Firmware Slot Information
00:23:36.795 =========================
00:23:36.795 Active slot: 0
00:23:36.795
00:23:36.795
00:23:36.795 Error Log
00:23:36.795 =========
00:23:36.795
00:23:36.795 Active Namespaces
00:23:36.795 =================
00:23:36.795 Discovery Log Page
00:23:36.795 ==================
00:23:36.795 Generation Counter: 2
00:23:36.795 Number of Records: 2
00:23:36.795 Record Format: 0
00:23:36.795
00:23:36.795 Discovery Log Entry 0
00:23:36.795 ----------------------
00:23:36.795 Transport Type: 3 (TCP)
00:23:36.795 Address Family: 1 (IPv4)
00:23:36.795 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:36.795 Entry Flags:
00:23:36.795 Duplicate Returned Information: 1
00:23:36.795 Explicit Persistent Connection Support for Discovery: 1
00:23:36.795 Transport Requirements:
00:23:36.795 Secure Channel: Not Required
00:23:36.795 Port ID: 0 (0x0000)
00:23:36.795 Controller ID: 65535 (0xffff)
00:23:36.795 Admin Max SQ Size: 128
00:23:36.795 Transport Service Identifier: 4420
00:23:36.795 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:36.795 Transport Address: 10.0.0.2
00:23:36.795
Discovery Log Entry 1 00:23:36.795 ---------------------- 00:23:36.795 Transport Type: 3 (TCP) 00:23:36.795 Address Family: 1 (IPv4) 00:23:36.795 Subsystem Type: 2 (NVM Subsystem) 00:23:36.795 Entry Flags: 00:23:36.795 Duplicate Returned Information: 0 00:23:36.795 Explicit Persistent Connection Support for Discovery: 0 00:23:36.795 Transport Requirements: 00:23:36.795 Secure Channel: Not Required 00:23:36.795 Port ID: 0 (0x0000) 00:23:36.795 Controller ID: 65535 (0xffff) 00:23:36.795 Admin Max SQ Size: 128 00:23:36.795 Transport Service Identifier: 4420 00:23:36.795 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:36.795 Transport Address: 10.0.0.2 [2024-04-26 12:20:30.019994] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:36.795 [2024-04-26 12:20:30.020011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.795 [2024-04-26 12:20:30.020019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.795 [2024-04-26 12:20:30.020026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.795 [2024-04-26 12:20:30.020033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.795 [2024-04-26 12:20:30.020043] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.795 [2024-04-26 12:20:30.020048] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.795 [2024-04-26 12:20:30.020052] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df0360) 00:23:36.795 [2024-04-26 12:20:30.020061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.795 [2024-04-26 12:20:30.020084] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38e40, cid 3, qid 0 00:23:36.795 [2024-04-26 12:20:30.020129] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.795 [2024-04-26 12:20:30.020137] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.795 [2024-04-26 12:20:30.020141] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.795 [2024-04-26 12:20:30.020145] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38e40) on tqpair=0x1df0360 00:23:36.795 [2024-04-26 12:20:30.020160] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.795 [2024-04-26 12:20:30.020165] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.795 [2024-04-26 12:20:30.020183] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df0360) 00:23:36.795 [2024-04-26 12:20:30.020192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.795 [2024-04-26 12:20:30.020218] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38e40, cid 3, qid 0 00:23:36.795 [2024-04-26 12:20:30.020287] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.795 [2024-04-26 12:20:30.020294] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.795 [2024-04-26 12:20:30.020298] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.795 [2024-04-26 12:20:30.020302] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38e40) on tqpair=0x1df0360 00:23:36.795 [2024-04-26 12:20:30.020309] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:36.795 [2024-04-26 12:20:30.020315] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:36.795 [2024-04-26 12:20:30.020326] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.795 [2024-04-26 12:20:30.020331] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.795 [2024-04-26 12:20:30.020335] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df0360) 00:23:36.795 [2024-04-26 12:20:30.020343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.795 [2024-04-26 12:20:30.020361] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38e40, cid 3, qid 0 00:23:36.795 [2024-04-26 12:20:30.020415] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.795 [2024-04-26 12:20:30.020422] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.795 [2024-04-26 12:20:30.020425] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.795 [2024-04-26 12:20:30.020430] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38e40) on tqpair=0x1df0360 00:23:36.795 [2024-04-26 12:20:30.020443] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.795 [2024-04-26 12:20:30.020448] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.795 [2024-04-26 12:20:30.020452] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df0360) 00:23:36.795 [2024-04-26 12:20:30.020460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.795 [2024-04-26 12:20:30.020477] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38e40, cid 3, qid 0 00:23:36.795 [2024-04-26 12:20:30.020522] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.795 [2024-04-26 12:20:30.020530] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.795 [2024-04-26 12:20:30.020534] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.795 [2024-04-26 12:20:30.020539] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38e40) on tqpair=0x1df0360 00:23:36.795 [2024-04-26 12:20:30.020551] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.795 [2024-04-26 12:20:30.020556] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.795 [2024-04-26 12:20:30.020560] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df0360) 00:23:36.795 [2024-04-26 12:20:30.020568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.795 [2024-04-26 12:20:30.020586] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38e40, cid 3, qid 0 00:23:36.795 [2024-04-26 12:20:30.020638] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.795 [2024-04-26 
12:20:30.020645] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.795 [2024-04-26 12:20:30.020649] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.795 [2024-04-26 12:20:30.020654] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38e40) on tqpair=0x1df0360 00:23:36.795 [2024-04-26 12:20:30.020666] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.795 [2024-04-26 12:20:30.020671] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.795 [2024-04-26 12:20:30.020675] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df0360) 00:23:36.795 [2024-04-26 12:20:30.020683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.795 [2024-04-26 12:20:30.020701] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38e40, cid 3, qid 0 00:23:36.796 [2024-04-26 12:20:30.020750] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.796 [2024-04-26 12:20:30.020757] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.796 [2024-04-26 12:20:30.020761] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.020765] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38e40) on tqpair=0x1df0360 00:23:36.796 [2024-04-26 12:20:30.020777] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.020782] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.020786] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df0360) 00:23:36.796 [2024-04-26 12:20:30.020794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.796 [2024-04-26 12:20:30.020811] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38e40, cid 3, qid 0 00:23:36.796 [2024-04-26 12:20:30.020863] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.796 [2024-04-26 12:20:30.020870] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.796 [2024-04-26 12:20:30.020874] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.020879] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38e40) on tqpair=0x1df0360 00:23:36.796 [2024-04-26 12:20:30.020890] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.020895] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.020899] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df0360) 00:23:36.796 [2024-04-26 12:20:30.020907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.796 [2024-04-26 12:20:30.020924] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38e40, cid 3, qid 0 00:23:36.796 [2024-04-26 12:20:30.020977] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.796 [2024-04-26 12:20:30.020989] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.796 [2024-04-26 12:20:30.020994] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:23:36.796 [2024-04-26 12:20:30.020998] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38e40) on tqpair=0x1df0360 00:23:36.796 [2024-04-26 12:20:30.021010] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.021016] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.021020] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df0360) 00:23:36.796 [2024-04-26 12:20:30.021028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.796 [2024-04-26 12:20:30.021046] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38e40, cid 3, qid 0 00:23:36.796 [2024-04-26 12:20:30.021096] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.796 [2024-04-26 12:20:30.021103] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.796 [2024-04-26 12:20:30.021107] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.021111] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38e40) on tqpair=0x1df0360 00:23:36.796 [2024-04-26 12:20:30.021123] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.021129] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.021133] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df0360) 00:23:36.796 [2024-04-26 12:20:30.021141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.796 [2024-04-26 12:20:30.021158] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38e40, cid 3, qid 0 00:23:36.796 [2024-04-26 12:20:30.025191] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.796 [2024-04-26 12:20:30.025215] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.796 [2024-04-26 12:20:30.025220] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.025226] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38e40) on tqpair=0x1df0360 00:23:36.796 [2024-04-26 12:20:30.025246] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.025252] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.025256] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df0360) 00:23:36.796 [2024-04-26 12:20:30.025267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.796 [2024-04-26 12:20:30.025294] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e38e40, cid 3, qid 0 00:23:36.796 [2024-04-26 12:20:30.025348] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.796 [2024-04-26 12:20:30.025357] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.796 [2024-04-26 12:20:30.025362] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.025367] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e38e40) on tqpair=0x1df0360 00:23:36.796 [2024-04-26 12:20:30.025377] 
nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:23:36.796 00:23:36.796 12:20:30 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:36.796 [2024-04-26 12:20:30.064934] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:23:36.796 [2024-04-26 12:20:30.064989] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71771 ] 00:23:36.796 [2024-04-26 12:20:30.203375] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:36.796 [2024-04-26 12:20:30.203449] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:36.796 [2024-04-26 12:20:30.203458] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:36.796 [2024-04-26 12:20:30.203472] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:36.796 [2024-04-26 12:20:30.203488] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:23:36.796 [2024-04-26 12:20:30.203641] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:36.796 [2024-04-26 12:20:30.203694] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x236f360 0 00:23:36.796 [2024-04-26 12:20:30.216192] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:36.796 [2024-04-26 12:20:30.216225] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:36.796 [2024-04-26 12:20:30.216231] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:36.796 [2024-04-26 12:20:30.216235] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:36.796 [2024-04-26 12:20:30.216286] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.216294] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.216299] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f360) 00:23:36.796 [2024-04-26 12:20:30.216315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:36.796 [2024-04-26 12:20:30.216349] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7a20, cid 0, qid 0 00:23:36.796 [2024-04-26 12:20:30.224197] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.796 [2024-04-26 12:20:30.224221] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.796 [2024-04-26 12:20:30.224227] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.224232] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7a20) on tqpair=0x236f360 00:23:36.796 [2024-04-26 12:20:30.224247] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:36.796 [2024-04-26 12:20:30.224257] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read 
vs (no timeout) 00:23:36.796 [2024-04-26 12:20:30.224263] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:36.796 [2024-04-26 12:20:30.224283] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.224289] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.796 [2024-04-26 12:20:30.224293] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f360) 00:23:36.797 [2024-04-26 12:20:30.224303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.797 [2024-04-26 12:20:30.224330] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7a20, cid 0, qid 0 00:23:36.797 [2024-04-26 12:20:30.224391] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.797 [2024-04-26 12:20:30.224399] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.797 [2024-04-26 12:20:30.224402] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.224407] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7a20) on tqpair=0x236f360 00:23:36.797 [2024-04-26 12:20:30.224418] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:36.797 [2024-04-26 12:20:30.224427] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:36.797 [2024-04-26 12:20:30.224435] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.224440] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.224444] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f360) 00:23:36.797 [2024-04-26 12:20:30.224452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.797 [2024-04-26 12:20:30.224471] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7a20, cid 0, qid 0 00:23:36.797 [2024-04-26 12:20:30.224916] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.797 [2024-04-26 12:20:30.224932] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.797 [2024-04-26 12:20:30.224937] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.224941] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7a20) on tqpair=0x236f360 00:23:36.797 [2024-04-26 12:20:30.224949] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:36.797 [2024-04-26 12:20:30.224959] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:36.797 [2024-04-26 12:20:30.224967] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.224972] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.224976] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f360) 00:23:36.797 [2024-04-26 12:20:30.224984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.797 [2024-04-26 12:20:30.225003] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7a20, cid 0, qid 0 00:23:36.797 [2024-04-26 12:20:30.225052] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.797 [2024-04-26 12:20:30.225058] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.797 [2024-04-26 12:20:30.225062] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.225066] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7a20) on tqpair=0x236f360 00:23:36.797 [2024-04-26 12:20:30.225074] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:36.797 [2024-04-26 12:20:30.225084] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.225089] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.225093] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f360) 00:23:36.797 [2024-04-26 12:20:30.225101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.797 [2024-04-26 12:20:30.225166] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7a20, cid 0, qid 0 00:23:36.797 [2024-04-26 12:20:30.225281] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.797 [2024-04-26 12:20:30.225289] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.797 [2024-04-26 12:20:30.225293] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.225297] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7a20) on tqpair=0x236f360 00:23:36.797 [2024-04-26 12:20:30.225304] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:36.797 [2024-04-26 12:20:30.225310] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:36.797 [2024-04-26 12:20:30.225319] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:36.797 [2024-04-26 12:20:30.225425] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:36.797 [2024-04-26 12:20:30.225435] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:36.797 [2024-04-26 12:20:30.225446] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.225450] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.225455] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f360) 00:23:36.797 [2024-04-26 12:20:30.225463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.797 [2024-04-26 12:20:30.225482] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7a20, cid 0, qid 0 00:23:36.797 [2024-04-26 
12:20:30.225809] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.797 [2024-04-26 12:20:30.225824] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.797 [2024-04-26 12:20:30.225829] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.225833] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7a20) on tqpair=0x236f360 00:23:36.797 [2024-04-26 12:20:30.225840] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:36.797 [2024-04-26 12:20:30.225852] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.225856] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.225861] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f360) 00:23:36.797 [2024-04-26 12:20:30.225869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.797 [2024-04-26 12:20:30.225888] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7a20, cid 0, qid 0 00:23:36.797 [2024-04-26 12:20:30.225943] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.797 [2024-04-26 12:20:30.225950] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.797 [2024-04-26 12:20:30.225953] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.225958] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7a20) on tqpair=0x236f360 00:23:36.797 [2024-04-26 12:20:30.225964] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:36.797 [2024-04-26 12:20:30.225969] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:36.797 [2024-04-26 12:20:30.225978] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:36.797 [2024-04-26 12:20:30.225989] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:36.797 [2024-04-26 12:20:30.226001] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.226006] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f360) 00:23:36.797 [2024-04-26 12:20:30.226014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.797 [2024-04-26 12:20:30.226033] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7a20, cid 0, qid 0 00:23:36.797 [2024-04-26 12:20:30.226581] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.797 [2024-04-26 12:20:30.226596] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.797 [2024-04-26 12:20:30.226602] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.226606] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x236f360): datao=0, datal=4096, cccid=0 00:23:36.797 [2024-04-26 
12:20:30.226611] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b7a20) on tqpair(0x236f360): expected_datao=0, payload_size=4096 00:23:36.797 [2024-04-26 12:20:30.226617] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.226626] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.226631] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.226641] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.797 [2024-04-26 12:20:30.226647] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.797 [2024-04-26 12:20:30.226651] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.797 [2024-04-26 12:20:30.226655] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7a20) on tqpair=0x236f360 00:23:36.797 [2024-04-26 12:20:30.226666] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:36.797 [2024-04-26 12:20:30.226672] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:36.797 [2024-04-26 12:20:30.226677] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:36.797 [2024-04-26 12:20:30.226687] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:36.797 [2024-04-26 12:20:30.226693] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:36.797 [2024-04-26 12:20:30.226698] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:36.797 [2024-04-26 12:20:30.226709] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:36.797 [2024-04-26 12:20:30.226718] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.226723] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.226727] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f360) 00:23:36.798 [2024-04-26 12:20:30.226735] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:36.798 [2024-04-26 12:20:30.226758] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7a20, cid 0, qid 0 00:23:36.798 [2024-04-26 12:20:30.226902] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.798 [2024-04-26 12:20:30.226910] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.798 [2024-04-26 12:20:30.226913] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.226918] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7a20) on tqpair=0x236f360 00:23:36.798 [2024-04-26 12:20:30.226928] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.226932] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.226936] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x236f360) 00:23:36.798 [2024-04-26 
12:20:30.226943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.798 [2024-04-26 12:20:30.226951] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.226955] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.226959] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x236f360) 00:23:36.798 [2024-04-26 12:20:30.226966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.798 [2024-04-26 12:20:30.226973] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.226977] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.226981] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x236f360) 00:23:36.798 [2024-04-26 12:20:30.226988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.798 [2024-04-26 12:20:30.226994] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.226999] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.227003] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f360) 00:23:36.798 [2024-04-26 12:20:30.227009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.798 [2024-04-26 12:20:30.227014] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:36.798 [2024-04-26 12:20:30.227028] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:36.798 [2024-04-26 12:20:30.227037] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.227041] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x236f360) 00:23:36.798 [2024-04-26 12:20:30.227049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.798 [2024-04-26 12:20:30.227069] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7a20, cid 0, qid 0 00:23:36.798 [2024-04-26 12:20:30.227077] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7b80, cid 1, qid 0 00:23:36.798 [2024-04-26 12:20:30.227082] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7ce0, cid 2, qid 0 00:23:36.798 [2024-04-26 12:20:30.227087] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7e40, cid 3, qid 0 00:23:36.798 [2024-04-26 12:20:30.227092] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7fa0, cid 4, qid 0 00:23:36.798 [2024-04-26 12:20:30.227467] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.798 [2024-04-26 12:20:30.227486] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.798 [2024-04-26 12:20:30.227491] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.798 [2024-04-26 
12:20:30.227495] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7fa0) on tqpair=0x236f360 00:23:36.798 [2024-04-26 12:20:30.227502] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:36.798 [2024-04-26 12:20:30.227509] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:36.798 [2024-04-26 12:20:30.227518] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:36.798 [2024-04-26 12:20:30.227526] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:36.798 [2024-04-26 12:20:30.227533] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.227538] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.227542] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x236f360) 00:23:36.798 [2024-04-26 12:20:30.227550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:36.798 [2024-04-26 12:20:30.227573] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7fa0, cid 4, qid 0 00:23:36.798 [2024-04-26 12:20:30.227624] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.798 [2024-04-26 12:20:30.227631] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.798 [2024-04-26 12:20:30.227635] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.227639] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7fa0) on tqpair=0x236f360 00:23:36.798 [2024-04-26 12:20:30.227692] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:36.798 [2024-04-26 12:20:30.227704] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:36.798 [2024-04-26 12:20:30.227713] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.227717] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x236f360) 00:23:36.798 [2024-04-26 12:20:30.227725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.798 [2024-04-26 12:20:30.227744] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7fa0, cid 4, qid 0 00:23:36.798 [2024-04-26 12:20:30.227884] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.798 [2024-04-26 12:20:30.227892] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.798 [2024-04-26 12:20:30.227896] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.227900] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x236f360): datao=0, datal=4096, cccid=4 00:23:36.798 [2024-04-26 12:20:30.227905] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b7fa0) on tqpair(0x236f360): 
expected_datao=0, payload_size=4096 00:23:36.798 [2024-04-26 12:20:30.227911] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.227919] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.227923] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.232186] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.798 [2024-04-26 12:20:30.232207] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.798 [2024-04-26 12:20:30.232213] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.232218] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7fa0) on tqpair=0x236f360 00:23:36.798 [2024-04-26 12:20:30.232232] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:36.798 [2024-04-26 12:20:30.232250] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:36.798 [2024-04-26 12:20:30.232263] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:36.798 [2024-04-26 12:20:30.232273] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.232278] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x236f360) 00:23:36.798 [2024-04-26 12:20:30.232287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.798 [2024-04-26 12:20:30.232313] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7fa0, cid 4, qid 0 00:23:36.798 [2024-04-26 12:20:30.232406] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.798 [2024-04-26 12:20:30.232414] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.798 [2024-04-26 12:20:30.232417] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.232421] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x236f360): datao=0, datal=4096, cccid=4 00:23:36.798 [2024-04-26 12:20:30.232426] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b7fa0) on tqpair(0x236f360): expected_datao=0, payload_size=4096 00:23:36.798 [2024-04-26 12:20:30.232432] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.232439] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.232444] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.232452] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.798 [2024-04-26 12:20:30.232459] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.798 [2024-04-26 12:20:30.232462] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.232467] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7fa0) on tqpair=0x236f360 00:23:36.798 [2024-04-26 12:20:30.232485] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:36.798 [2024-04-26 12:20:30.232496] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:36.798 [2024-04-26 12:20:30.232505] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.232509] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x236f360) 00:23:36.798 [2024-04-26 12:20:30.232517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.798 [2024-04-26 12:20:30.232537] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7fa0, cid 4, qid 0 00:23:36.798 [2024-04-26 12:20:30.232976] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.798 [2024-04-26 12:20:30.232991] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.798 [2024-04-26 12:20:30.232997] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.798 [2024-04-26 12:20:30.233001] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x236f360): datao=0, datal=4096, cccid=4 00:23:36.799 [2024-04-26 12:20:30.233006] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b7fa0) on tqpair(0x236f360): expected_datao=0, payload_size=4096 00:23:36.799 [2024-04-26 12:20:30.233011] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.233018] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.233023] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.233031] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.799 [2024-04-26 12:20:30.233038] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.799 [2024-04-26 12:20:30.233042] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.233046] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7fa0) on tqpair=0x236f360 00:23:36.799 [2024-04-26 12:20:30.233056] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:36.799 [2024-04-26 12:20:30.233066] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:36.799 [2024-04-26 12:20:30.233077] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:36.799 [2024-04-26 12:20:30.233084] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:36.799 [2024-04-26 12:20:30.233090] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:36.799 [2024-04-26 12:20:30.233096] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:36.799 [2024-04-26 12:20:30.233101] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:36.799 [2024-04-26 12:20:30.233107] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state 
to ready (no timeout) 00:23:36.799 [2024-04-26 12:20:30.233126] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.233132] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x236f360) 00:23:36.799 [2024-04-26 12:20:30.233140] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.799 [2024-04-26 12:20:30.233148] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.233152] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.233156] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x236f360) 00:23:36.799 [2024-04-26 12:20:30.233162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.799 [2024-04-26 12:20:30.233201] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7fa0, cid 4, qid 0 00:23:36.799 [2024-04-26 12:20:30.233211] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8100, cid 5, qid 0 00:23:36.799 [2024-04-26 12:20:30.233614] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.799 [2024-04-26 12:20:30.233621] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.799 [2024-04-26 12:20:30.233625] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.233629] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7fa0) on tqpair=0x236f360 00:23:36.799 [2024-04-26 12:20:30.233637] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.799 [2024-04-26 12:20:30.233644] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.799 [2024-04-26 12:20:30.233647] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.233652] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b8100) on tqpair=0x236f360 00:23:36.799 [2024-04-26 12:20:30.233663] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.233668] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x236f360) 00:23:36.799 [2024-04-26 12:20:30.233676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.799 [2024-04-26 12:20:30.233693] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8100, cid 5, qid 0 00:23:36.799 [2024-04-26 12:20:30.233830] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.799 [2024-04-26 12:20:30.233836] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.799 [2024-04-26 12:20:30.233840] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.233844] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b8100) on tqpair=0x236f360 00:23:36.799 [2024-04-26 12:20:30.233856] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.233860] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x236f360) 00:23:36.799 [2024-04-26 12:20:30.233868] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.799 [2024-04-26 12:20:30.233884] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8100, cid 5, qid 0 00:23:36.799 [2024-04-26 12:20:30.234024] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.799 [2024-04-26 12:20:30.234031] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.799 [2024-04-26 12:20:30.234035] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.234039] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b8100) on tqpair=0x236f360 00:23:36.799 [2024-04-26 12:20:30.234051] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.234055] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x236f360) 00:23:36.799 [2024-04-26 12:20:30.234062] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.799 [2024-04-26 12:20:30.234078] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8100, cid 5, qid 0 00:23:36.799 [2024-04-26 12:20:30.234377] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.799 [2024-04-26 12:20:30.234387] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.799 [2024-04-26 12:20:30.234390] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.234395] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b8100) on tqpair=0x236f360 00:23:36.799 [2024-04-26 12:20:30.234411] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.234416] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x236f360) 00:23:36.799 [2024-04-26 12:20:30.234424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.799 [2024-04-26 12:20:30.234433] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.234437] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x236f360) 00:23:36.799 [2024-04-26 12:20:30.234444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.799 [2024-04-26 12:20:30.234453] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.234457] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x236f360) 00:23:36.799 [2024-04-26 12:20:30.234463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.799 [2024-04-26 12:20:30.234472] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.234477] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x236f360) 00:23:36.799 [2024-04-26 12:20:30.234483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:36.799 [2024-04-26 12:20:30.234504] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8100, cid 5, qid 0 00:23:36.799 [2024-04-26 12:20:30.234511] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7fa0, cid 4, qid 0 00:23:36.799 [2024-04-26 12:20:30.234516] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b8260, cid 6, qid 0 00:23:36.799 [2024-04-26 12:20:30.234521] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b83c0, cid 7, qid 0 00:23:36.799 [2024-04-26 12:20:30.235168] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.799 [2024-04-26 12:20:30.235194] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.799 [2024-04-26 12:20:30.235199] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.235203] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x236f360): datao=0, datal=8192, cccid=5 00:23:36.799 [2024-04-26 12:20:30.235209] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b8100) on tqpair(0x236f360): expected_datao=0, payload_size=8192 00:23:36.799 [2024-04-26 12:20:30.235213] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.235232] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.235238] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.235244] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.799 [2024-04-26 12:20:30.235250] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.799 [2024-04-26 12:20:30.235254] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.235258] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x236f360): datao=0, datal=512, cccid=4 00:23:36.799 [2024-04-26 12:20:30.235263] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b7fa0) on tqpair(0x236f360): expected_datao=0, payload_size=512 00:23:36.799 [2024-04-26 12:20:30.235267] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.235274] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.235278] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.235294] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.799 [2024-04-26 12:20:30.235301] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.799 [2024-04-26 12:20:30.235304] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.799 [2024-04-26 12:20:30.235308] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x236f360): datao=0, datal=512, cccid=6 00:23:36.800 [2024-04-26 12:20:30.235313] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b8260) on tqpair(0x236f360): expected_datao=0, payload_size=512 00:23:36.800 [2024-04-26 12:20:30.235318] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.800 [2024-04-26 12:20:30.235325] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.800 [2024-04-26 12:20:30.235329] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.800 [2024-04-26 12:20:30.235334] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
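For reference, the controller bring-up trace above (connect adminq, read VS/CAP, enable, identify, AER and keep-alive setup, supported log-page reads) is produced by the single spdk_nvme_identify invocation shown at the start of this test. A minimal sketch of repeating that step by hand against the target this run already configured, with the binary path, transport string, and -L all debug flag taken verbatim from the log above:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Query the NVMe-oF/TCP subsystem and dump the identify pages; -L all enables the
  # debug log flags that generate the nvme_ctrlr.c / nvme_tcp.c *DEBUG* lines seen here.
  "$SPDK/build/bin/spdk_nvme_identify" \
      -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L all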
00:23:36.800 [2024-04-26 12:20:30.235341] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.800 [2024-04-26 12:20:30.235344] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.800 [2024-04-26 12:20:30.235348] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x236f360): datao=0, datal=4096, cccid=7 00:23:36.800 [2024-04-26 12:20:30.235353] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23b83c0) on tqpair(0x236f360): expected_datao=0, payload_size=4096 00:23:36.800 [2024-04-26 12:20:30.235357] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.800 [2024-04-26 12:20:30.235364] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.800 [2024-04-26 12:20:30.235369] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.800 [2024-04-26 12:20:30.235377] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.800 [2024-04-26 12:20:30.235383] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.800 [2024-04-26 12:20:30.235387] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.800 [2024-04-26 12:20:30.235391] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b8100) on tqpair=0x236f360 00:23:36.800 [2024-04-26 12:20:30.235412] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.800 ===================================================== 00:23:36.800 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:36.800 ===================================================== 00:23:36.800 Controller Capabilities/Features 00:23:36.800 ================================ 00:23:36.800 Vendor ID: 8086 00:23:36.800 Subsystem Vendor ID: 8086 00:23:36.800 Serial Number: SPDK00000000000001 00:23:36.800 Model Number: SPDK bdev Controller 00:23:36.800 Firmware Version: 24.05 00:23:36.800 Recommended Arb Burst: 6 00:23:36.800 IEEE OUI Identifier: e4 d2 5c 00:23:36.800 Multi-path I/O 00:23:36.800 May have multiple subsystem ports: Yes 00:23:36.800 May have multiple controllers: Yes 00:23:36.800 Associated with SR-IOV VF: No 00:23:36.800 Max Data Transfer Size: 131072 00:23:36.800 Max Number of Namespaces: 32 00:23:36.800 Max Number of I/O Queues: 127 00:23:36.800 NVMe Specification Version (VS): 1.3 00:23:36.800 NVMe Specification Version (Identify): 1.3 00:23:36.800 Maximum Queue Entries: 128 00:23:36.800 Contiguous Queues Required: Yes 00:23:36.800 Arbitration Mechanisms Supported 00:23:36.800 Weighted Round Robin: Not Supported 00:23:36.800 Vendor Specific: Not Supported 00:23:36.800 Reset Timeout: 15000 ms 00:23:36.800 Doorbell Stride: 4 bytes 00:23:36.800 NVM Subsystem Reset: Not Supported 00:23:36.800 Command Sets Supported 00:23:36.800 NVM Command Set: Supported 00:23:36.800 Boot Partition: Not Supported 00:23:36.800 Memory Page Size Minimum: 4096 bytes 00:23:36.800 Memory Page Size Maximum: 4096 bytes 00:23:36.800 Persistent Memory Region: Not Supported 00:23:36.800 Optional Asynchronous Events Supported 00:23:36.800 Namespace Attribute Notices: Supported 00:23:36.800 Firmware Activation Notices: Not Supported 00:23:36.800 ANA Change Notices: Not Supported 00:23:36.800 PLE Aggregate Log Change Notices: Not Supported 00:23:36.800 LBA Status Info Alert Notices: Not Supported 00:23:36.800 EGE Aggregate Log Change Notices: Not Supported 00:23:36.800 Normal NVM Subsystem Shutdown event: Not Supported 00:23:36.800 Zone Descriptor Change Notices: Not 
Supported 00:23:36.800 Discovery Log Change Notices: Not Supported 00:23:36.800 Controller Attributes 00:23:36.800 128-bit Host Identifier: Supported 00:23:36.800 Non-Operational Permissive Mode: Not Supported 00:23:36.800 NVM Sets: Not Supported 00:23:36.800 Read Recovery Levels: Not Supported 00:23:36.800 Endurance Groups: Not Supported 00:23:36.800 Predictable Latency Mode: Not Supported 00:23:36.800 Traffic Based Keep ALive: Not Supported 00:23:36.800 Namespace Granularity: Not Supported 00:23:36.800 SQ Associations: Not Supported 00:23:36.800 UUID List: Not Supported 00:23:36.800 Multi-Domain Subsystem: Not Supported 00:23:36.800 Fixed Capacity Management: Not Supported 00:23:36.800 Variable Capacity Management: Not Supported 00:23:36.800 Delete Endurance Group: Not Supported 00:23:36.800 Delete NVM Set: Not Supported 00:23:36.800 Extended LBA Formats Supported: Not Supported 00:23:36.800 Flexible Data Placement Supported: Not Supported 00:23:36.800 00:23:36.800 Controller Memory Buffer Support 00:23:36.800 ================================ 00:23:36.800 Supported: No 00:23:36.800 00:23:36.800 Persistent Memory Region Support 00:23:36.800 ================================ 00:23:36.800 Supported: No 00:23:36.800 00:23:36.800 Admin Command Set Attributes 00:23:36.800 ============================ 00:23:36.800 Security Send/Receive: Not Supported 00:23:36.800 Format NVM: Not Supported 00:23:36.800 Firmware Activate/Download: Not Supported 00:23:36.800 Namespace Management: Not Supported 00:23:36.800 Device Self-Test: Not Supported 00:23:36.800 Directives: Not Supported 00:23:36.800 NVMe-MI: Not Supported 00:23:36.800 Virtualization Management: Not Supported 00:23:36.800 Doorbell Buffer Config: Not Supported 00:23:36.800 Get LBA Status Capability: Not Supported 00:23:36.800 Command & Feature Lockdown Capability: Not Supported 00:23:36.800 Abort Command Limit: 4 00:23:36.800 Async Event Request Limit: 4 00:23:36.800 Number of Firmware Slots: N/A 00:23:36.800 Firmware Slot 1 Read-Only: N/A 00:23:36.800 Firmware Activation Without Reset: [2024-04-26 12:20:30.235420] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.800 [2024-04-26 12:20:30.235424] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.800 [2024-04-26 12:20:30.235428] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7fa0) on tqpair=0x236f360 00:23:36.800 [2024-04-26 12:20:30.235439] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.800 [2024-04-26 12:20:30.235446] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.800 [2024-04-26 12:20:30.235450] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.800 [2024-04-26 12:20:30.235454] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b8260) on tqpair=0x236f360 00:23:36.800 [2024-04-26 12:20:30.235462] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.800 [2024-04-26 12:20:30.235468] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.800 [2024-04-26 12:20:30.235472] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.800 [2024-04-26 12:20:30.235476] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b83c0) on tqpair=0x236f360 00:23:36.800 N/A 00:23:36.800 Multiple Update Detection Support: N/A 00:23:36.800 Firmware Update Granularity: No Information Provided 00:23:36.800 Per-Namespace SMART Log: No 00:23:36.800 Asymmetric 
Namespace Access Log Page: Not Supported 00:23:36.800 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:36.800 Command Effects Log Page: Supported 00:23:36.800 Get Log Page Extended Data: Supported 00:23:36.800 Telemetry Log Pages: Not Supported 00:23:36.800 Persistent Event Log Pages: Not Supported 00:23:36.800 Supported Log Pages Log Page: May Support 00:23:36.800 Commands Supported & Effects Log Page: Not Supported 00:23:36.800 Feature Identifiers & Effects Log Page:May Support 00:23:36.800 NVMe-MI Commands & Effects Log Page: May Support 00:23:36.800 Data Area 4 for Telemetry Log: Not Supported 00:23:36.800 Error Log Page Entries Supported: 128 00:23:36.800 Keep Alive: Supported 00:23:36.800 Keep Alive Granularity: 10000 ms 00:23:36.800 00:23:36.800 NVM Command Set Attributes 00:23:36.800 ========================== 00:23:36.800 Submission Queue Entry Size 00:23:36.800 Max: 64 00:23:36.800 Min: 64 00:23:36.800 Completion Queue Entry Size 00:23:36.800 Max: 16 00:23:36.800 Min: 16 00:23:36.800 Number of Namespaces: 32 00:23:36.800 Compare Command: Supported 00:23:36.800 Write Uncorrectable Command: Not Supported 00:23:36.800 Dataset Management Command: Supported 00:23:36.800 Write Zeroes Command: Supported 00:23:36.800 Set Features Save Field: Not Supported 00:23:36.800 Reservations: Supported 00:23:36.800 Timestamp: Not Supported 00:23:36.800 Copy: Supported 00:23:36.800 Volatile Write Cache: Present 00:23:36.801 Atomic Write Unit (Normal): 1 00:23:36.801 Atomic Write Unit (PFail): 1 00:23:36.801 Atomic Compare & Write Unit: 1 00:23:36.801 Fused Compare & Write: Supported 00:23:36.801 Scatter-Gather List 00:23:36.801 SGL Command Set: Supported 00:23:36.801 SGL Keyed: Supported 00:23:36.801 SGL Bit Bucket Descriptor: Not Supported 00:23:36.801 SGL Metadata Pointer: Not Supported 00:23:36.801 Oversized SGL: Not Supported 00:23:36.801 SGL Metadata Address: Not Supported 00:23:36.801 SGL Offset: Supported 00:23:36.801 Transport SGL Data Block: Not Supported 00:23:36.801 Replay Protected Memory Block: Not Supported 00:23:36.801 00:23:36.801 Firmware Slot Information 00:23:36.801 ========================= 00:23:36.801 Active slot: 1 00:23:36.801 Slot 1 Firmware Revision: 24.05 00:23:36.801 00:23:36.801 00:23:36.801 Commands Supported and Effects 00:23:36.801 ============================== 00:23:36.801 Admin Commands 00:23:36.801 -------------- 00:23:36.801 Get Log Page (02h): Supported 00:23:36.801 Identify (06h): Supported 00:23:36.801 Abort (08h): Supported 00:23:36.801 Set Features (09h): Supported 00:23:36.801 Get Features (0Ah): Supported 00:23:36.801 Asynchronous Event Request (0Ch): Supported 00:23:36.801 Keep Alive (18h): Supported 00:23:36.801 I/O Commands 00:23:36.801 ------------ 00:23:36.801 Flush (00h): Supported LBA-Change 00:23:36.801 Write (01h): Supported LBA-Change 00:23:36.801 Read (02h): Supported 00:23:36.801 Compare (05h): Supported 00:23:36.801 Write Zeroes (08h): Supported LBA-Change 00:23:36.801 Dataset Management (09h): Supported LBA-Change 00:23:36.801 Copy (19h): Supported LBA-Change 00:23:36.801 Unknown (79h): Supported LBA-Change 00:23:36.801 Unknown (7Ah): Supported 00:23:36.801 00:23:36.801 Error Log 00:23:36.801 ========= 00:23:36.801 00:23:36.801 Arbitration 00:23:36.801 =========== 00:23:36.801 Arbitration Burst: 1 00:23:36.801 00:23:36.801 Power Management 00:23:36.801 ================ 00:23:36.801 Number of Power States: 1 00:23:36.801 Current Power State: Power State #0 00:23:36.801 Power State #0: 00:23:36.801 Max Power: 0.00 W 00:23:36.801 Non-Operational 
State: Operational 00:23:36.801 Entry Latency: Not Reported 00:23:36.801 Exit Latency: Not Reported 00:23:36.801 Relative Read Throughput: 0 00:23:36.801 Relative Read Latency: 0 00:23:36.801 Relative Write Throughput: 0 00:23:36.801 Relative Write Latency: 0 00:23:36.801 Idle Power: Not Reported 00:23:36.801 Active Power: Not Reported 00:23:36.801 Non-Operational Permissive Mode: Not Supported 00:23:36.801 00:23:36.801 Health Information 00:23:36.801 ================== 00:23:36.801 Critical Warnings: 00:23:36.801 Available Spare Space: OK 00:23:36.801 Temperature: OK 00:23:36.801 Device Reliability: OK 00:23:36.801 Read Only: No 00:23:36.801 Volatile Memory Backup: OK 00:23:36.801 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:36.801 Temperature Threshold: [2024-04-26 12:20:30.235595] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.801 [2024-04-26 12:20:30.235603] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x236f360) 00:23:36.801 [2024-04-26 12:20:30.235611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.801 [2024-04-26 12:20:30.235636] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b83c0, cid 7, qid 0 00:23:36.801 [2024-04-26 12:20:30.235692] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.801 [2024-04-26 12:20:30.235700] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.801 [2024-04-26 12:20:30.235703] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.801 [2024-04-26 12:20:30.235708] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b83c0) on tqpair=0x236f360 00:23:36.801 [2024-04-26 12:20:30.235744] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:36.801 [2024-04-26 12:20:30.235758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.801 [2024-04-26 12:20:30.235765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.801 [2024-04-26 12:20:30.235772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.801 [2024-04-26 12:20:30.235779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.801 [2024-04-26 12:20:30.235788] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.801 [2024-04-26 12:20:30.235793] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.801 [2024-04-26 12:20:30.235797] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f360) 00:23:36.801 [2024-04-26 12:20:30.235805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.801 [2024-04-26 12:20:30.235827] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7e40, cid 3, qid 0 00:23:36.801 [2024-04-26 12:20:30.235886] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.801 [2024-04-26 12:20:30.235893] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.801 [2024-04-26 12:20:30.235897] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.801 [2024-04-26 12:20:30.235901] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7e40) on tqpair=0x236f360 00:23:36.801 [2024-04-26 12:20:30.235911] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.801 [2024-04-26 12:20:30.235915] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.801 [2024-04-26 12:20:30.235919] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f360) 00:23:36.801 [2024-04-26 12:20:30.235927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.801 [2024-04-26 12:20:30.235947] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7e40, cid 3, qid 0 00:23:36.801 [2024-04-26 12:20:30.236106] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.801 [2024-04-26 12:20:30.236112] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.801 [2024-04-26 12:20:30.236116] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.801 [2024-04-26 12:20:30.236120] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7e40) on tqpair=0x236f360 00:23:36.801 [2024-04-26 12:20:30.236127] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:36.801 [2024-04-26 12:20:30.236132] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:36.801 [2024-04-26 12:20:30.236142] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.801 [2024-04-26 12:20:30.236147] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.801 [2024-04-26 12:20:30.236151] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x236f360) 00:23:36.801 [2024-04-26 12:20:30.236158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.801 [2024-04-26 12:20:30.240183] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23b7e40, cid 3, qid 0 00:23:36.801 [2024-04-26 12:20:30.240212] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.801 [2024-04-26 12:20:30.240220] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.801 [2024-04-26 12:20:30.240225] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.802 [2024-04-26 12:20:30.240229] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23b7e40) on tqpair=0x236f360 00:23:36.802 [2024-04-26 12:20:30.240241] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:23:36.802 0 Kelvin (-273 Celsius) 00:23:36.802 Available Spare: 0% 00:23:36.802 Available Spare Threshold: 0% 00:23:36.802 Life Percentage Used: 0% 00:23:36.802 Data Units Read: 0 00:23:36.802 Data Units Written: 0 00:23:36.802 Host Read Commands: 0 00:23:36.802 Host Write Commands: 0 00:23:36.802 Controller Busy Time: 0 minutes 00:23:36.802 Power Cycles: 0 00:23:36.802 Power On Hours: 0 hours 00:23:36.802 Unsafe Shutdowns: 0 00:23:36.802 Unrecoverable Media Errors: 0 00:23:36.802 Lifetime Error Log Entries: 0 00:23:36.802 Warning Temperature Time: 0 minutes 00:23:36.802 Critical Temperature Time: 0 minutes 00:23:36.802 
00:23:36.802 Number of Queues 00:23:36.802 ================ 00:23:36.802 Number of I/O Submission Queues: 127 00:23:36.802 Number of I/O Completion Queues: 127 00:23:36.802 00:23:36.802 Active Namespaces 00:23:36.802 ================= 00:23:36.802 Namespace ID:1 00:23:36.802 Error Recovery Timeout: Unlimited 00:23:36.802 Command Set Identifier: NVM (00h) 00:23:36.802 Deallocate: Supported 00:23:36.802 Deallocated/Unwritten Error: Not Supported 00:23:36.802 Deallocated Read Value: Unknown 00:23:36.802 Deallocate in Write Zeroes: Not Supported 00:23:36.802 Deallocated Guard Field: 0xFFFF 00:23:36.802 Flush: Supported 00:23:36.802 Reservation: Supported 00:23:36.802 Namespace Sharing Capabilities: Multiple Controllers 00:23:36.802 Size (in LBAs): 131072 (0GiB) 00:23:36.802 Capacity (in LBAs): 131072 (0GiB) 00:23:36.802 Utilization (in LBAs): 131072 (0GiB) 00:23:36.802 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:36.802 EUI64: ABCDEF0123456789 00:23:36.802 UUID: ee009624-b220-442a-b2f0-3368ba9626b4 00:23:36.802 Thin Provisioning: Not Supported 00:23:36.802 Per-NS Atomic Units: Yes 00:23:36.802 Atomic Boundary Size (Normal): 0 00:23:36.802 Atomic Boundary Size (PFail): 0 00:23:36.802 Atomic Boundary Offset: 0 00:23:36.802 Maximum Single Source Range Length: 65535 00:23:36.802 Maximum Copy Length: 65535 00:23:36.802 Maximum Source Range Count: 1 00:23:36.802 NGUID/EUI64 Never Reused: No 00:23:36.802 Namespace Write Protected: No 00:23:36.802 Number of LBA Formats: 1 00:23:36.802 Current LBA Format: LBA Format #00 00:23:36.802 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:36.802 00:23:36.802 12:20:30 -- host/identify.sh@51 -- # sync 00:23:37.068 12:20:30 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:37.068 12:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.068 12:20:30 -- common/autotest_common.sh@10 -- # set +x 00:23:37.068 12:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.069 12:20:30 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:37.069 12:20:30 -- host/identify.sh@56 -- # nvmftestfini 00:23:37.069 12:20:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:37.069 12:20:30 -- nvmf/common.sh@117 -- # sync 00:23:37.069 12:20:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:37.069 12:20:30 -- nvmf/common.sh@120 -- # set +e 00:23:37.069 12:20:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:37.069 12:20:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:37.069 rmmod nvme_tcp 00:23:37.069 rmmod nvme_fabrics 00:23:37.069 rmmod nvme_keyring 00:23:37.069 12:20:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:37.069 12:20:30 -- nvmf/common.sh@124 -- # set -e 00:23:37.069 12:20:30 -- nvmf/common.sh@125 -- # return 0 00:23:37.069 12:20:30 -- nvmf/common.sh@478 -- # '[' -n 71734 ']' 00:23:37.069 12:20:30 -- nvmf/common.sh@479 -- # killprocess 71734 00:23:37.069 12:20:30 -- common/autotest_common.sh@936 -- # '[' -z 71734 ']' 00:23:37.069 12:20:30 -- common/autotest_common.sh@940 -- # kill -0 71734 00:23:37.069 12:20:30 -- common/autotest_common.sh@941 -- # uname 00:23:37.069 12:20:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:37.069 12:20:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71734 00:23:37.069 12:20:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:37.069 12:20:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:37.069 killing process with pid 71734 00:23:37.069 12:20:30 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 71734' 00:23:37.069 12:20:30 -- common/autotest_common.sh@955 -- # kill 71734 00:23:37.069 [2024-04-26 12:20:30.374047] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:23:37.069 12:20:30 -- common/autotest_common.sh@960 -- # wait 71734 00:23:37.325 12:20:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:37.325 12:20:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:37.325 12:20:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:37.325 12:20:30 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:37.325 12:20:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:37.325 12:20:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.325 12:20:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.325 12:20:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.325 12:20:30 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:37.325 00:23:37.325 real 0m2.509s 00:23:37.325 user 0m6.982s 00:23:37.325 sys 0m0.641s 00:23:37.325 12:20:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:37.325 12:20:30 -- common/autotest_common.sh@10 -- # set +x 00:23:37.325 ************************************ 00:23:37.325 END TEST nvmf_identify 00:23:37.325 ************************************ 00:23:37.325 12:20:30 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:37.325 12:20:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:37.325 12:20:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:37.325 12:20:30 -- common/autotest_common.sh@10 -- # set +x 00:23:37.584 ************************************ 00:23:37.584 START TEST nvmf_perf 00:23:37.584 ************************************ 00:23:37.584 12:20:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:37.584 * Looking for test storage... 
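Before the perf run below gets going, note that the identify-test teardown traced above (nvmf_delete_subsystem, host module unload, killing the reactor_0 target process) can be replayed by hand if a run is interrupted. A rough sketch using the RPC script path and NQN recorded in this log; the pid is specific to this run, and the plain kill is a simplification of the harness's killprocess helper:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Remove the test subsystem from the running nvmf target...
  "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # ...unload the host-side NVMe/TCP and fabrics modules...
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # ...and stop the target app (reactor_0); 71734 is the pid from this particular run.
  kill 71734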
00:23:37.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:37.584 12:20:30 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:37.584 12:20:30 -- nvmf/common.sh@7 -- # uname -s 00:23:37.584 12:20:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.584 12:20:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.584 12:20:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.584 12:20:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.584 12:20:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.584 12:20:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.584 12:20:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.584 12:20:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.584 12:20:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.584 12:20:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.584 12:20:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:23:37.584 12:20:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:23:37.584 12:20:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.584 12:20:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.584 12:20:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:37.584 12:20:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:37.584 12:20:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:37.584 12:20:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.584 12:20:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.584 12:20:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.584 12:20:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.584 12:20:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.584 12:20:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.584 12:20:30 -- paths/export.sh@5 -- # export PATH 00:23:37.584 12:20:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.584 12:20:30 -- nvmf/common.sh@47 -- # : 0 00:23:37.584 12:20:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:37.584 12:20:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:37.584 12:20:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.584 12:20:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.584 12:20:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.584 12:20:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:37.584 12:20:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:37.584 12:20:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:37.584 12:20:30 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:37.584 12:20:30 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:37.584 12:20:30 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:37.584 12:20:30 -- host/perf.sh@17 -- # nvmftestinit 00:23:37.584 12:20:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:37.584 12:20:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.584 12:20:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:37.584 12:20:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:37.584 12:20:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:37.584 12:20:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.584 12:20:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.584 12:20:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.584 12:20:30 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:37.584 12:20:30 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:37.584 12:20:30 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:37.584 12:20:30 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:37.584 12:20:30 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:37.584 12:20:30 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:37.584 12:20:30 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.584 12:20:30 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.584 12:20:30 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:37.584 12:20:30 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:37.584 12:20:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:37.584 12:20:30 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:37.584 12:20:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:37.584 12:20:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.584 12:20:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:37.584 12:20:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:37.584 12:20:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:37.584 12:20:30 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:37.584 12:20:30 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:37.584 12:20:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:37.584 Cannot find device "nvmf_tgt_br" 00:23:37.584 12:20:30 -- nvmf/common.sh@155 -- # true 00:23:37.584 12:20:30 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:37.584 Cannot find device "nvmf_tgt_br2" 00:23:37.584 12:20:30 -- nvmf/common.sh@156 -- # true 00:23:37.584 12:20:30 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:37.584 12:20:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:37.584 Cannot find device "nvmf_tgt_br" 00:23:37.584 12:20:30 -- nvmf/common.sh@158 -- # true 00:23:37.584 12:20:30 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:37.584 Cannot find device "nvmf_tgt_br2" 00:23:37.584 12:20:30 -- nvmf/common.sh@159 -- # true 00:23:37.584 12:20:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:37.584 12:20:31 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:37.584 12:20:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:37.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:37.584 12:20:31 -- nvmf/common.sh@162 -- # true 00:23:37.584 12:20:31 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:37.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:37.843 12:20:31 -- nvmf/common.sh@163 -- # true 00:23:37.843 12:20:31 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:37.843 12:20:31 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:37.843 12:20:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:37.843 12:20:31 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:37.843 12:20:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:37.843 12:20:31 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:37.843 12:20:31 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:37.843 12:20:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:37.843 12:20:31 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:37.843 12:20:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:37.843 12:20:31 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:37.843 12:20:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:37.843 12:20:31 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:37.843 12:20:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:37.843 12:20:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:23:37.843 12:20:31 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:37.843 12:20:31 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:37.843 12:20:31 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:37.843 12:20:31 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:37.843 12:20:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:37.843 12:20:31 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:37.843 12:20:31 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:37.843 12:20:31 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:37.844 12:20:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:37.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:23:37.844 00:23:37.844 --- 10.0.0.2 ping statistics --- 00:23:37.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.844 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:23:37.844 12:20:31 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:37.844 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:37.844 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:23:37.844 00:23:37.844 --- 10.0.0.3 ping statistics --- 00:23:37.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.844 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:23:37.844 12:20:31 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:37.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:37.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:23:37.844 00:23:37.844 --- 10.0.0.1 ping statistics --- 00:23:37.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.844 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:23:37.844 12:20:31 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.844 12:20:31 -- nvmf/common.sh@422 -- # return 0 00:23:37.844 12:20:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:37.844 12:20:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.844 12:20:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:37.844 12:20:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:37.844 12:20:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.844 12:20:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:37.844 12:20:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:37.844 12:20:31 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:37.844 12:20:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:37.844 12:20:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:37.844 12:20:31 -- common/autotest_common.sh@10 -- # set +x 00:23:37.844 12:20:31 -- nvmf/common.sh@470 -- # nvmfpid=71945 00:23:37.844 12:20:31 -- nvmf/common.sh@471 -- # waitforlisten 71945 00:23:37.844 12:20:31 -- common/autotest_common.sh@817 -- # '[' -z 71945 ']' 00:23:37.844 12:20:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.844 12:20:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:37.844 12:20:31 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:37.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
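The nvmf_veth_init trace above builds three veth pairs, moves the target-side ends into the nvmf_tgt_ns_spdk namespace, bridges the host-side ends, opens TCP port 4420, and checks reachability with single pings. A condensed sketch of that topology, assuming the same interface names and 10.0.0.0/24 addressing:

#!/usr/bin/env bash
# Sketch of the veth/bridge layout that nvmf_veth_init creates (per the commands traced above).
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# Three veth pairs: *_if ends carry traffic, *_br ends get enslaved to the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec "$NS" sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'

# Bridge the root-namespace ends together and allow NVMe/TCP traffic through.
ip link add nvmf_br type bridge && ip link set nvmf_br up
for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3      # reachability check, as in the log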
00:23:37.844 12:20:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.844 12:20:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:37.844 12:20:31 -- common/autotest_common.sh@10 -- # set +x 00:23:38.113 [2024-04-26 12:20:31.332095] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:23:38.113 [2024-04-26 12:20:31.332227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.113 [2024-04-26 12:20:31.474260] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:38.371 [2024-04-26 12:20:31.635197] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.371 [2024-04-26 12:20:31.635534] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.371 [2024-04-26 12:20:31.635704] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.371 [2024-04-26 12:20:31.635863] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.371 [2024-04-26 12:20:31.635998] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:38.371 [2024-04-26 12:20:31.636328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.371 [2024-04-26 12:20:31.636578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.371 [2024-04-26 12:20:31.636752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:38.371 [2024-04-26 12:20:31.637080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.936 12:20:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:38.936 12:20:32 -- common/autotest_common.sh@850 -- # return 0 00:23:38.936 12:20:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:38.936 12:20:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:38.936 12:20:32 -- common/autotest_common.sh@10 -- # set +x 00:23:38.936 12:20:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.936 12:20:32 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:38.936 12:20:32 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:23:39.503 12:20:32 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:39.503 12:20:32 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:23:39.761 12:20:33 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:23:39.761 12:20:33 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:40.018 12:20:33 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:40.018 12:20:33 -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:23:40.018 12:20:33 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:40.018 12:20:33 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:40.018 12:20:33 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:40.276 [2024-04-26 12:20:33.637067] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.276 12:20:33 -- host/perf.sh@44 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:40.534 12:20:33 -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:40.534 12:20:33 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:40.792 12:20:34 -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:40.792 12:20:34 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:41.070 12:20:34 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:41.327 [2024-04-26 12:20:34.578285] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.327 12:20:34 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:41.585 12:20:34 -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:41.585 12:20:34 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:41.585 12:20:34 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:41.585 12:20:34 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:42.532 Initializing NVMe Controllers 00:23:42.532 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:42.532 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:23:42.532 Initialization complete. Launching workers. 00:23:42.532 ======================================================== 00:23:42.532 Latency(us) 00:23:42.532 Device Information : IOPS MiB/s Average min max 00:23:42.532 PCIE (0000:00:10.0) NSID 1 from core 0: 23902.97 93.37 1338.92 319.51 6452.91 00:23:42.532 ======================================================== 00:23:42.532 Total : 23902.97 93.37 1338.92 319.51 6452.91 00:23:42.532 00:23:42.532 12:20:35 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:43.906 Initializing NVMe Controllers 00:23:43.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:43.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:43.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:43.906 Initialization complete. Launching workers. 
00:23:43.906 ======================================================== 00:23:43.906 Latency(us) 00:23:43.906 Device Information : IOPS MiB/s Average min max 00:23:43.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3469.31 13.55 286.76 111.76 7132.87 00:23:43.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.62 0.48 8152.72 4876.09 12032.98 00:23:43.906 ======================================================== 00:23:43.906 Total : 3592.93 14.03 557.40 111.76 12032.98 00:23:43.906 00:23:43.906 12:20:37 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:45.281 Initializing NVMe Controllers 00:23:45.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:45.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:45.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:45.281 Initialization complete. Launching workers. 00:23:45.281 ======================================================== 00:23:45.281 Latency(us) 00:23:45.282 Device Information : IOPS MiB/s Average min max 00:23:45.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8403.92 32.83 3808.83 545.83 8657.92 00:23:45.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3955.96 15.45 8136.20 6057.37 16556.14 00:23:45.282 ======================================================== 00:23:45.282 Total : 12359.89 48.28 5193.87 545.83 16556.14 00:23:45.282 00:23:45.282 12:20:38 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:23:45.282 12:20:38 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:47.814 Initializing NVMe Controllers 00:23:47.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:47.814 Controller IO queue size 128, less than required. 00:23:47.814 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:47.814 Controller IO queue size 128, less than required. 00:23:47.814 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:47.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:47.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:47.814 Initialization complete. Launching workers. 
00:23:47.814 ======================================================== 00:23:47.814 Latency(us) 00:23:47.815 Device Information : IOPS MiB/s Average min max 00:23:47.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1686.09 421.52 77318.40 36080.17 164707.87 00:23:47.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 637.78 159.44 203808.93 101432.27 325698.39 00:23:47.815 ======================================================== 00:23:47.815 Total : 2323.86 580.97 112033.29 36080.17 325698.39 00:23:47.815 00:23:47.815 12:20:41 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:48.073 No valid NVMe controllers or AIO or URING devices found 00:23:48.073 Initializing NVMe Controllers 00:23:48.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:48.073 Controller IO queue size 128, less than required. 00:23:48.073 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:48.073 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:48.073 Controller IO queue size 128, less than required. 00:23:48.073 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:48.073 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:23:48.073 WARNING: Some requested NVMe devices were skipped 00:23:48.073 12:20:41 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:50.603 Initializing NVMe Controllers 00:23:50.603 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:50.603 Controller IO queue size 128, less than required. 00:23:50.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:50.603 Controller IO queue size 128, less than required. 00:23:50.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:50.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:50.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:50.603 Initialization complete. Launching workers. 
00:23:50.603 00:23:50.603 ==================== 00:23:50.603 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:50.603 TCP transport: 00:23:50.603 polls: 7009 00:23:50.603 idle_polls: 0 00:23:50.603 sock_completions: 7009 00:23:50.603 nvme_completions: 6371 00:23:50.603 submitted_requests: 9596 00:23:50.603 queued_requests: 1 00:23:50.603 00:23:50.603 ==================== 00:23:50.603 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:50.603 TCP transport: 00:23:50.603 polls: 7750 00:23:50.603 idle_polls: 0 00:23:50.603 sock_completions: 7750 00:23:50.603 nvme_completions: 6415 00:23:50.603 submitted_requests: 9572 00:23:50.603 queued_requests: 1 00:23:50.603 ======================================================== 00:23:50.603 Latency(us) 00:23:50.603 Device Information : IOPS MiB/s Average min max 00:23:50.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1592.40 398.10 81783.07 43057.29 143640.20 00:23:50.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1603.40 400.85 80590.66 36042.17 120561.58 00:23:50.603 ======================================================== 00:23:50.603 Total : 3195.80 798.95 81184.81 36042.17 143640.20 00:23:50.603 00:23:50.603 12:20:43 -- host/perf.sh@66 -- # sync 00:23:50.603 12:20:43 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:50.862 12:20:44 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:50.862 12:20:44 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:50.862 12:20:44 -- host/perf.sh@114 -- # nvmftestfini 00:23:50.862 12:20:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:50.862 12:20:44 -- nvmf/common.sh@117 -- # sync 00:23:50.862 12:20:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:50.862 12:20:44 -- nvmf/common.sh@120 -- # set +e 00:23:50.862 12:20:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:50.862 12:20:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:50.862 rmmod nvme_tcp 00:23:50.862 rmmod nvme_fabrics 00:23:50.862 rmmod nvme_keyring 00:23:51.121 12:20:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:51.121 12:20:44 -- nvmf/common.sh@124 -- # set -e 00:23:51.121 12:20:44 -- nvmf/common.sh@125 -- # return 0 00:23:51.121 12:20:44 -- nvmf/common.sh@478 -- # '[' -n 71945 ']' 00:23:51.121 12:20:44 -- nvmf/common.sh@479 -- # killprocess 71945 00:23:51.121 12:20:44 -- common/autotest_common.sh@936 -- # '[' -z 71945 ']' 00:23:51.121 12:20:44 -- common/autotest_common.sh@940 -- # kill -0 71945 00:23:51.121 12:20:44 -- common/autotest_common.sh@941 -- # uname 00:23:51.121 12:20:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:51.121 12:20:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71945 00:23:51.121 killing process with pid 71945 00:23:51.121 12:20:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:51.121 12:20:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:51.121 12:20:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71945' 00:23:51.121 12:20:44 -- common/autotest_common.sh@955 -- # kill 71945 00:23:51.121 12:20:44 -- common/autotest_common.sh@960 -- # wait 71945 00:23:51.688 12:20:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:51.688 12:20:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:51.688 12:20:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:51.688 12:20:45 -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:51.688 12:20:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:51.688 12:20:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.688 12:20:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.688 12:20:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.948 12:20:45 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:51.948 ************************************ 00:23:51.948 END TEST nvmf_perf 00:23:51.948 ************************************ 00:23:51.948 00:23:51.948 real 0m14.367s 00:23:51.948 user 0m52.206s 00:23:51.948 sys 0m4.087s 00:23:51.948 12:20:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:51.948 12:20:45 -- common/autotest_common.sh@10 -- # set +x 00:23:51.948 12:20:45 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:51.948 12:20:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:51.948 12:20:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:51.948 12:20:45 -- common/autotest_common.sh@10 -- # set +x 00:23:51.948 ************************************ 00:23:51.948 START TEST nvmf_fio_host 00:23:51.948 ************************************ 00:23:51.948 12:20:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:51.948 * Looking for test storage... 00:23:51.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:51.948 12:20:45 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:51.948 12:20:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.948 12:20:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.948 12:20:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.948 12:20:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.948 12:20:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.948 12:20:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.948 12:20:45 -- paths/export.sh@5 -- # export PATH 00:23:51.948 12:20:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.948 12:20:45 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:51.948 12:20:45 -- nvmf/common.sh@7 -- # uname -s 00:23:51.948 12:20:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.948 12:20:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.948 12:20:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.948 12:20:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.948 12:20:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.948 12:20:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.948 12:20:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.948 12:20:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.948 12:20:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.948 12:20:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.948 12:20:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:23:51.948 12:20:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:23:51.948 12:20:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.948 12:20:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.948 12:20:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:51.948 12:20:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.948 12:20:45 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:51.948 12:20:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.948 12:20:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.948 12:20:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.948 12:20:45 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.948 12:20:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.207 12:20:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.207 12:20:45 -- paths/export.sh@5 -- # export PATH 00:23:52.207 12:20:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.207 12:20:45 -- nvmf/common.sh@47 -- # : 0 00:23:52.207 12:20:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:52.207 12:20:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:52.207 12:20:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.207 12:20:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.207 12:20:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.207 12:20:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:52.207 12:20:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:52.207 12:20:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:52.207 12:20:45 -- host/fio.sh@12 -- # nvmftestinit 00:23:52.207 12:20:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:52.207 12:20:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.207 12:20:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:52.207 12:20:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:52.207 12:20:45 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:23:52.207 12:20:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.207 12:20:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.207 12:20:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.207 12:20:45 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:52.207 12:20:45 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:52.207 12:20:45 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:52.207 12:20:45 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:52.207 12:20:45 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:52.207 12:20:45 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:52.207 12:20:45 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.207 12:20:45 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.207 12:20:45 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:52.207 12:20:45 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:52.207 12:20:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:52.207 12:20:45 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:52.207 12:20:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:52.207 12:20:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.207 12:20:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:52.207 12:20:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:52.207 12:20:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:52.207 12:20:45 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:52.207 12:20:45 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:52.207 12:20:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:52.207 Cannot find device "nvmf_tgt_br" 00:23:52.207 12:20:45 -- nvmf/common.sh@155 -- # true 00:23:52.207 12:20:45 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:52.207 Cannot find device "nvmf_tgt_br2" 00:23:52.207 12:20:45 -- nvmf/common.sh@156 -- # true 00:23:52.207 12:20:45 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:52.207 12:20:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:52.207 Cannot find device "nvmf_tgt_br" 00:23:52.207 12:20:45 -- nvmf/common.sh@158 -- # true 00:23:52.207 12:20:45 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:52.207 Cannot find device "nvmf_tgt_br2" 00:23:52.207 12:20:45 -- nvmf/common.sh@159 -- # true 00:23:52.207 12:20:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:52.207 12:20:45 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:52.207 12:20:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:52.207 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:52.207 12:20:45 -- nvmf/common.sh@162 -- # true 00:23:52.207 12:20:45 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:52.207 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:52.207 12:20:45 -- nvmf/common.sh@163 -- # true 00:23:52.207 12:20:45 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:52.207 12:20:45 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:52.207 12:20:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 
00:23:52.207 12:20:45 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:52.207 12:20:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:52.207 12:20:45 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:52.207 12:20:45 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:52.207 12:20:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:52.207 12:20:45 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:52.207 12:20:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:52.207 12:20:45 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:52.207 12:20:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:52.207 12:20:45 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:52.207 12:20:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:52.467 12:20:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:52.467 12:20:45 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:52.467 12:20:45 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:52.467 12:20:45 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:52.467 12:20:45 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:52.467 12:20:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:52.467 12:20:45 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:52.467 12:20:45 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:52.467 12:20:45 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:52.467 12:20:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:52.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:23:52.467 00:23:52.467 --- 10.0.0.2 ping statistics --- 00:23:52.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.467 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:23:52.467 12:20:45 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:52.467 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:52.467 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:23:52.467 00:23:52.467 --- 10.0.0.3 ping statistics --- 00:23:52.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.467 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:52.467 12:20:45 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:52.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:52.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:23:52.467 00:23:52.467 --- 10.0.0.1 ping statistics --- 00:23:52.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.467 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:23:52.467 12:20:45 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.467 12:20:45 -- nvmf/common.sh@422 -- # return 0 00:23:52.467 12:20:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:52.467 12:20:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.467 12:20:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:52.467 12:20:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:52.467 12:20:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.467 12:20:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:52.467 12:20:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:52.467 12:20:45 -- host/fio.sh@14 -- # [[ y != y ]] 00:23:52.467 12:20:45 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:23:52.467 12:20:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:52.467 12:20:45 -- common/autotest_common.sh@10 -- # set +x 00:23:52.467 12:20:45 -- host/fio.sh@22 -- # nvmfpid=72356 00:23:52.467 12:20:45 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:52.467 12:20:45 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:52.467 12:20:45 -- host/fio.sh@26 -- # waitforlisten 72356 00:23:52.467 12:20:45 -- common/autotest_common.sh@817 -- # '[' -z 72356 ']' 00:23:52.467 12:20:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.467 12:20:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:52.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.467 12:20:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.467 12:20:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:52.467 12:20:45 -- common/autotest_common.sh@10 -- # set +x 00:23:52.467 [2024-04-26 12:20:45.867837] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:23:52.467 [2024-04-26 12:20:45.867991] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.727 [2024-04-26 12:20:46.020112] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:52.727 [2024-04-26 12:20:46.147816] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.727 [2024-04-26 12:20:46.148138] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.727 [2024-04-26 12:20:46.148424] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.727 [2024-04-26 12:20:46.148658] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.727 [2024-04-26 12:20:46.148792] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
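Before the fio run, the harness starts nvmf_tgt inside the namespace (traced above) and configures it through /var/tmp/spdk.sock. A hedged sketch of that bring-up, condensed from the rpc.py calls recorded in this log; rpc.py is used directly here where the harness goes through its rpc_cmd wrapper, and the socket-wait loop is a stand-in for the suite's own waitforlisten helper:

#!/usr/bin/env bash
# Sketch of the NVMe-oF/TCP target setup used for the fio host test (assumptions noted inline).
SPDK=/home/vagrant/spdk_repo/spdk
RPC=$SPDK/scripts/rpc.py

# Launch the target inside the test namespace; it serves RPCs on /var/tmp/spdk.sock.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &

# Stand-in for waitforlisten: poll for the RPC socket (assumption, not the harness code).
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done

"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc1
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# The fio plugin run that follows in the log then points fio at that listener:
LD_PRELOAD="$SPDK/build/fio/spdk_nvme" /usr/src/fio/fio "$SPDK/app/fio/nvme/example_config.fio" \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096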
00:23:52.727 [2024-04-26 12:20:46.149025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.727 [2024-04-26 12:20:46.149217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:52.727 [2024-04-26 12:20:46.149228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.727 [2024-04-26 12:20:46.149090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.663 12:20:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:53.663 12:20:46 -- common/autotest_common.sh@850 -- # return 0 00:23:53.663 12:20:46 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.663 12:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.663 12:20:46 -- common/autotest_common.sh@10 -- # set +x 00:23:53.663 [2024-04-26 12:20:46.878553] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.663 12:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.663 12:20:46 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:23:53.663 12:20:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:53.663 12:20:46 -- common/autotest_common.sh@10 -- # set +x 00:23:53.663 12:20:46 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:53.663 12:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.663 12:20:46 -- common/autotest_common.sh@10 -- # set +x 00:23:53.663 Malloc1 00:23:53.663 12:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.663 12:20:46 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:53.663 12:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.663 12:20:46 -- common/autotest_common.sh@10 -- # set +x 00:23:53.663 12:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.663 12:20:46 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:53.663 12:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.663 12:20:46 -- common/autotest_common.sh@10 -- # set +x 00:23:53.663 12:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.663 12:20:46 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:53.663 12:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.663 12:20:46 -- common/autotest_common.sh@10 -- # set +x 00:23:53.663 [2024-04-26 12:20:46.983726] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.663 12:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.663 12:20:46 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:53.663 12:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.663 12:20:46 -- common/autotest_common.sh@10 -- # set +x 00:23:53.663 12:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.663 12:20:46 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:23:53.663 12:20:46 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:53.663 12:20:46 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
00:23:53.663 12:20:46 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:53.663 12:20:46 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:53.663 12:20:46 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:53.663 12:20:46 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:53.663 12:20:46 -- common/autotest_common.sh@1327 -- # shift 00:23:53.663 12:20:46 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:53.663 12:20:46 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:53.663 12:20:46 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:53.663 12:20:46 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:53.663 12:20:47 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:53.663 12:20:47 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:53.663 12:20:47 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:53.663 12:20:47 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:53.663 12:20:47 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:53.663 12:20:47 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:53.663 12:20:47 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:53.663 12:20:47 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:53.663 12:20:47 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:53.663 12:20:47 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:53.664 12:20:47 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:53.922 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:53.922 fio-3.35 00:23:53.922 Starting 1 thread 00:23:56.556 00:23:56.556 test: (groupid=0, jobs=1): err= 0: pid=72411: Fri Apr 26 12:20:49 2024 00:23:56.556 read: IOPS=8658, BW=33.8MiB/s (35.5MB/s)(67.9MiB/2007msec) 00:23:56.556 slat (usec): min=2, max=191, avg= 2.46, stdev= 1.83 00:23:56.556 clat (usec): min=1516, max=14244, avg=7703.69, stdev=528.35 00:23:56.556 lat (usec): min=1540, max=14246, avg=7706.15, stdev=528.20 00:23:56.556 clat percentiles (usec): 00:23:56.556 | 1.00th=[ 6652], 5.00th=[ 6980], 10.00th=[ 7111], 20.00th=[ 7308], 00:23:56.556 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:23:56.556 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8291], 95.00th=[ 8455], 00:23:56.556 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[12256], 99.95th=[13304], 00:23:56.556 | 99.99th=[14222] 00:23:56.556 bw ( KiB/s): min=33888, max=35168, per=99.93%, avg=34610.00, stdev=531.32, samples=4 00:23:56.556 iops : min= 8472, max= 8792, avg=8652.50, stdev=132.83, samples=4 00:23:56.556 write: IOPS=8649, BW=33.8MiB/s (35.4MB/s)(67.8MiB/2007msec); 0 zone resets 00:23:56.556 slat (usec): min=2, max=117, avg= 2.56, stdev= 1.47 00:23:56.556 clat (usec): min=1119, max=13215, avg=7019.90, stdev=471.09 00:23:56.556 lat (usec): min=1126, max=13217, avg=7022.46, stdev=471.04 00:23:56.556 clat percentiles (usec): 00:23:56.557 | 1.00th=[ 6063], 5.00th=[ 6390], 10.00th=[ 6521], 20.00th=[ 6718], 00:23:56.557 | 30.00th=[ 6849], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7111], 00:23:56.557 | 70.00th=[ 
7242], 80.00th=[ 7308], 90.00th=[ 7504], 95.00th=[ 7701], 00:23:56.557 | 99.00th=[ 7963], 99.50th=[ 8160], 99.90th=[11076], 99.95th=[11600], 00:23:56.557 | 99.99th=[12649] 00:23:56.557 bw ( KiB/s): min=34288, max=34920, per=100.00%, avg=34610.00, stdev=298.65, samples=4 00:23:56.557 iops : min= 8572, max= 8730, avg=8652.50, stdev=74.66, samples=4 00:23:56.557 lat (msec) : 2=0.04%, 4=0.12%, 10=99.66%, 20=0.18% 00:23:56.557 cpu : usr=70.79%, sys=21.73%, ctx=31, majf=0, minf=6 00:23:56.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:56.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:56.557 issued rwts: total=17378,17359,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.557 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:56.557 00:23:56.557 Run status group 0 (all jobs): 00:23:56.557 READ: bw=33.8MiB/s (35.5MB/s), 33.8MiB/s-33.8MiB/s (35.5MB/s-35.5MB/s), io=67.9MiB (71.2MB), run=2007-2007msec 00:23:56.557 WRITE: bw=33.8MiB/s (35.4MB/s), 33.8MiB/s-33.8MiB/s (35.4MB/s-35.4MB/s), io=67.8MiB (71.1MB), run=2007-2007msec 00:23:56.557 12:20:49 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:56.557 12:20:49 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:56.557 12:20:49 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:56.557 12:20:49 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:56.557 12:20:49 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:56.557 12:20:49 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:56.557 12:20:49 -- common/autotest_common.sh@1327 -- # shift 00:23:56.557 12:20:49 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:56.557 12:20:49 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:56.557 12:20:49 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:56.557 12:20:49 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:56.557 12:20:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:56.557 12:20:49 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:56.557 12:20:49 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:56.557 12:20:49 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:56.557 12:20:49 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:56.557 12:20:49 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:56.557 12:20:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:56.557 12:20:49 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:56.557 12:20:49 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:56.557 12:20:49 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:56.557 12:20:49 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:56.557 test: (g=0): 
rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:56.557 fio-3.35 00:23:56.557 Starting 1 thread 00:23:59.088 00:23:59.088 test: (groupid=0, jobs=1): err= 0: pid=72465: Fri Apr 26 12:20:51 2024 00:23:59.088 read: IOPS=8163, BW=128MiB/s (134MB/s)(256MiB/2005msec) 00:23:59.088 slat (usec): min=3, max=118, avg= 3.76, stdev= 1.61 00:23:59.088 clat (usec): min=2710, max=19024, avg=9005.59, stdev=3017.25 00:23:59.088 lat (usec): min=2714, max=19028, avg=9009.35, stdev=3017.30 00:23:59.088 clat percentiles (usec): 00:23:59.088 | 1.00th=[ 4080], 5.00th=[ 4817], 10.00th=[ 5407], 20.00th=[ 6259], 00:23:59.088 | 30.00th=[ 7177], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[ 9372], 00:23:59.088 | 70.00th=[10421], 80.00th=[11338], 90.00th=[13304], 95.00th=[15008], 00:23:59.088 | 99.00th=[16909], 99.50th=[17695], 99.90th=[19006], 99.95th=[19006], 00:23:59.088 | 99.99th=[19006] 00:23:59.088 bw ( KiB/s): min=59872, max=68640, per=49.51%, avg=64672.00, stdev=3767.13, samples=4 00:23:59.088 iops : min= 3742, max= 4290, avg=4042.00, stdev=235.45, samples=4 00:23:59.088 write: IOPS=4628, BW=72.3MiB/s (75.8MB/s)(133MiB/1838msec); 0 zone resets 00:23:59.088 slat (usec): min=36, max=206, avg=38.05, stdev= 3.96 00:23:59.088 clat (usec): min=4820, max=21553, avg=12096.56, stdev=2318.67 00:23:59.088 lat (usec): min=4868, max=21589, avg=12134.62, stdev=2318.72 00:23:59.088 clat percentiles (usec): 00:23:59.088 | 1.00th=[ 7963], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10290], 00:23:59.088 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11731], 60.00th=[12256], 00:23:59.088 | 70.00th=[12911], 80.00th=[13960], 90.00th=[15401], 95.00th=[16581], 00:23:59.088 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19792], 99.95th=[20055], 00:23:59.088 | 99.99th=[21627] 00:23:59.088 bw ( KiB/s): min=62560, max=71680, per=91.22%, avg=67560.00, stdev=4274.80, samples=4 00:23:59.088 iops : min= 3910, max= 4480, avg=4222.50, stdev=267.17, samples=4 00:23:59.088 lat (msec) : 4=0.52%, 10=48.97%, 20=50.49%, 50=0.02% 00:23:59.088 cpu : usr=80.99%, sys=14.52%, ctx=5, majf=0, minf=23 00:23:59.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:23:59.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:59.088 issued rwts: total=16368,8508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.088 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:59.088 00:23:59.088 Run status group 0 (all jobs): 00:23:59.088 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=256MiB (268MB), run=2005-2005msec 00:23:59.088 WRITE: bw=72.3MiB/s (75.8MB/s), 72.3MiB/s-72.3MiB/s (75.8MB/s-75.8MB/s), io=133MiB (139MB), run=1838-1838msec 00:23:59.088 12:20:51 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:59.088 12:20:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.088 12:20:51 -- common/autotest_common.sh@10 -- # set +x 00:23:59.088 12:20:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.088 12:20:51 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:23:59.088 12:20:51 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:23:59.088 12:20:51 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:23:59.088 12:20:51 -- host/fio.sh@84 -- # nvmftestfini 00:23:59.088 12:20:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:59.088 12:20:51 -- nvmf/common.sh@117 -- # sync 00:23:59.088 12:20:52 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:59.088 12:20:52 -- nvmf/common.sh@120 -- # set +e 00:23:59.088 12:20:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:59.088 12:20:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:59.088 rmmod nvme_tcp 00:23:59.088 rmmod nvme_fabrics 00:23:59.088 rmmod nvme_keyring 00:23:59.088 12:20:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:59.088 12:20:52 -- nvmf/common.sh@124 -- # set -e 00:23:59.088 12:20:52 -- nvmf/common.sh@125 -- # return 0 00:23:59.088 12:20:52 -- nvmf/common.sh@478 -- # '[' -n 72356 ']' 00:23:59.088 12:20:52 -- nvmf/common.sh@479 -- # killprocess 72356 00:23:59.088 12:20:52 -- common/autotest_common.sh@936 -- # '[' -z 72356 ']' 00:23:59.088 12:20:52 -- common/autotest_common.sh@940 -- # kill -0 72356 00:23:59.088 12:20:52 -- common/autotest_common.sh@941 -- # uname 00:23:59.088 12:20:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:59.088 12:20:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72356 00:23:59.088 killing process with pid 72356 00:23:59.088 12:20:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:59.088 12:20:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:59.088 12:20:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72356' 00:23:59.088 12:20:52 -- common/autotest_common.sh@955 -- # kill 72356 00:23:59.088 12:20:52 -- common/autotest_common.sh@960 -- # wait 72356 00:23:59.088 12:20:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:59.088 12:20:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:59.088 12:20:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:59.089 12:20:52 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:59.089 12:20:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:59.089 12:20:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.089 12:20:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.089 12:20:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.089 12:20:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:59.089 ************************************ 00:23:59.089 END TEST nvmf_fio_host 00:23:59.089 ************************************ 00:23:59.089 00:23:59.089 real 0m7.099s 00:23:59.089 user 0m27.494s 00:23:59.089 sys 0m2.114s 00:23:59.089 12:20:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:59.089 12:20:52 -- common/autotest_common.sh@10 -- # set +x 00:23:59.089 12:20:52 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:59.089 12:20:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:59.089 12:20:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:59.089 12:20:52 -- common/autotest_common.sh@10 -- # set +x 00:23:59.089 ************************************ 00:23:59.089 START TEST nvmf_failover 00:23:59.089 ************************************ 00:23:59.089 12:20:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:59.347 * Looking for test storage... 
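The fio runs above go through autotest_common.sh's fio_plugin helper, which checks whether the SPDK fio engine was linked against a sanitizer runtime so that libasan (or clang's libclang_rt.asan) can be preloaded ahead of the plugin; in this build neither is linked, so asan_lib stays empty and only the engine itself lands in LD_PRELOAD. A condensed sketch of that probe, keeping the paths from the trace (the variable names here are shorthand, not the script's own):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  LD_PRELOAD=
  for sanitizer in libasan libclang_rt.asan; do
      # column 3 of ldd output is the resolved library path; empty on an unsanitized build
      asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n "$asan_lib" ]] && LD_PRELOAD="$asan_lib $LD_PRELOAD"
  done
  LD_PRELOAD="$LD_PRELOAD $plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096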
00:23:59.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:59.347 12:20:52 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:59.347 12:20:52 -- nvmf/common.sh@7 -- # uname -s 00:23:59.347 12:20:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.347 12:20:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.347 12:20:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.347 12:20:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.347 12:20:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.348 12:20:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.348 12:20:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.348 12:20:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.348 12:20:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.348 12:20:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.348 12:20:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:23:59.348 12:20:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:23:59.348 12:20:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.348 12:20:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.348 12:20:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:59.348 12:20:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.348 12:20:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:59.348 12:20:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.348 12:20:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.348 12:20:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.348 12:20:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.348 12:20:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.348 12:20:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.348 12:20:52 -- paths/export.sh@5 -- # export PATH 00:23:59.348 12:20:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.348 12:20:52 -- nvmf/common.sh@47 -- # : 0 00:23:59.348 12:20:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:59.348 12:20:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:59.348 12:20:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.348 12:20:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.348 12:20:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.348 12:20:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:59.348 12:20:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:59.348 12:20:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:59.348 12:20:52 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:59.348 12:20:52 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:59.348 12:20:52 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:59.348 12:20:52 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:59.348 12:20:52 -- host/failover.sh@18 -- # nvmftestinit 00:23:59.348 12:20:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:59.348 12:20:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.348 12:20:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:59.348 12:20:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:59.348 12:20:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:59.348 12:20:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.348 12:20:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.348 12:20:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.348 12:20:52 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:59.348 12:20:52 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:59.348 12:20:52 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:59.348 12:20:52 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:59.348 12:20:52 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:59.348 12:20:52 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:59.348 12:20:52 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.348 12:20:52 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.348 12:20:52 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:59.348 12:20:52 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:59.348 12:20:52 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:59.348 12:20:52 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:59.348 12:20:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:59.348 12:20:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.348 12:20:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:59.348 12:20:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:59.348 12:20:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:59.348 12:20:52 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:59.348 12:20:52 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:59.348 12:20:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:59.348 Cannot find device "nvmf_tgt_br" 00:23:59.348 12:20:52 -- nvmf/common.sh@155 -- # true 00:23:59.348 12:20:52 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:59.348 Cannot find device "nvmf_tgt_br2" 00:23:59.348 12:20:52 -- nvmf/common.sh@156 -- # true 00:23:59.348 12:20:52 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:59.348 12:20:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:59.348 Cannot find device "nvmf_tgt_br" 00:23:59.348 12:20:52 -- nvmf/common.sh@158 -- # true 00:23:59.348 12:20:52 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:59.348 Cannot find device "nvmf_tgt_br2" 00:23:59.348 12:20:52 -- nvmf/common.sh@159 -- # true 00:23:59.348 12:20:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:59.348 12:20:52 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:59.348 12:20:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:59.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:59.348 12:20:52 -- nvmf/common.sh@162 -- # true 00:23:59.348 12:20:52 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:59.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:59.348 12:20:52 -- nvmf/common.sh@163 -- # true 00:23:59.348 12:20:52 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:59.348 12:20:52 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:59.348 12:20:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:59.348 12:20:52 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:59.348 12:20:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:59.348 12:20:52 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:59.348 12:20:52 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:59.348 12:20:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:59.348 12:20:52 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:59.607 12:20:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:59.607 12:20:52 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:59.607 12:20:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:59.607 12:20:52 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:59.607 12:20:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:23:59.607 12:20:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:59.607 12:20:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:59.607 12:20:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:59.607 12:20:52 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:59.607 12:20:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:59.607 12:20:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:59.607 12:20:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:59.607 12:20:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:59.607 12:20:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:59.607 12:20:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:59.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:23:59.607 00:23:59.607 --- 10.0.0.2 ping statistics --- 00:23:59.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.607 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:23:59.607 12:20:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:59.607 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:59.607 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:23:59.607 00:23:59.607 --- 10.0.0.3 ping statistics --- 00:23:59.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.607 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:23:59.607 12:20:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:59.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:59.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:23:59.607 00:23:59.607 --- 10.0.0.1 ping statistics --- 00:23:59.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.607 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:23:59.607 12:20:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.607 12:20:52 -- nvmf/common.sh@422 -- # return 0 00:23:59.607 12:20:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:59.607 12:20:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.607 12:20:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:59.607 12:20:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:59.607 12:20:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.607 12:20:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:59.607 12:20:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:59.607 12:20:52 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:59.607 12:20:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:59.607 12:20:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:59.607 12:20:52 -- common/autotest_common.sh@10 -- # set +x 00:23:59.607 12:20:52 -- nvmf/common.sh@470 -- # nvmfpid=72673 00:23:59.607 12:20:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:59.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
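Before the target application comes up, nvmf_veth_init (traced above) builds the two-path test network: the initiator keeps nvmf_init_if at 10.0.0.1 in the root namespace, the target's nvmf_tgt_if and nvmf_tgt_if2 sit inside the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, and the three veth peers are joined on one bridge, with the pings confirming each path. The essential steps, condensed (the "Cannot find device" errors earlier in the trace are just the first-run cleanup finding nothing to delete):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target path
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target path
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                        # initiator -> both target paths
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1               # target -> initiator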
00:23:59.607 12:20:52 -- nvmf/common.sh@471 -- # waitforlisten 72673 00:23:59.607 12:20:52 -- common/autotest_common.sh@817 -- # '[' -z 72673 ']' 00:23:59.607 12:20:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.607 12:20:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:59.607 12:20:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.607 12:20:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:59.607 12:20:52 -- common/autotest_common.sh@10 -- # set +x 00:23:59.607 [2024-04-26 12:20:53.000358] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:23:59.607 [2024-04-26 12:20:53.000463] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.865 [2024-04-26 12:20:53.137446] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:59.865 [2024-04-26 12:20:53.253358] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.865 [2024-04-26 12:20:53.253626] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.865 [2024-04-26 12:20:53.253709] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.865 [2024-04-26 12:20:53.253787] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.865 [2024-04-26 12:20:53.253866] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:59.865 [2024-04-26 12:20:53.254060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.865 [2024-04-26 12:20:53.254261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:59.865 [2024-04-26 12:20:53.254273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.797 12:20:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:00.797 12:20:53 -- common/autotest_common.sh@850 -- # return 0 00:24:00.797 12:20:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:00.797 12:20:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:00.797 12:20:53 -- common/autotest_common.sh@10 -- # set +x 00:24:00.797 12:20:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.797 12:20:53 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:00.797 [2024-04-26 12:20:54.192590] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.797 12:20:54 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:01.054 Malloc0 00:24:01.313 12:20:54 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:01.571 12:20:54 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:01.829 12:20:55 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:01.829 [2024-04-26 12:20:55.284316] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.087 12:20:55 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:02.087 [2024-04-26 12:20:55.508485] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:02.087 12:20:55 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:02.346 [2024-04-26 12:20:55.732687] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:02.346 12:20:55 -- host/failover.sh@31 -- # bdevperf_pid=72736 00:24:02.346 12:20:55 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:02.346 12:20:55 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:02.346 12:20:55 -- host/failover.sh@34 -- # waitforlisten 72736 /var/tmp/bdevperf.sock 00:24:02.346 12:20:55 -- common/autotest_common.sh@817 -- # '[' -z 72736 ']' 00:24:02.346 12:20:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:02.346 12:20:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:02.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:02.346 12:20:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
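With networking in place, the target side is assembled entirely over JSON-RPC: nvmf_tgt runs inside the namespace, a 64 MiB malloc bdev backs subsystem nqn.2016-06.io.spdk:cnode1, and listeners are opened on three ports of 10.0.0.2 so the initiator has paths to fail over between; bdevperf is then started on the host side with its own RPC socket. Condensed from the trace above ($rpc is shorthand for scripts/rpc.py):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # (the script waits for /var/tmp/spdk.sock to appear before issuing RPCs)
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # initiator-side I/O generator, controlled over a second RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 15 -f &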
00:24:02.346 12:20:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:02.346 12:20:55 -- common/autotest_common.sh@10 -- # set +x 00:24:03.281 12:20:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:03.281 12:20:56 -- common/autotest_common.sh@850 -- # return 0 00:24:03.281 12:20:56 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:03.850 NVMe0n1 00:24:03.850 12:20:57 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:04.109 00:24:04.109 12:20:57 -- host/failover.sh@39 -- # run_test_pid=72754 00:24:04.109 12:20:57 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:04.109 12:20:57 -- host/failover.sh@41 -- # sleep 1 00:24:05.044 12:20:58 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:05.302 [2024-04-26 12:20:58.602314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.302 [2024-04-26 12:20:58.602378] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.302 [2024-04-26 12:20:58.602390] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.302 [2024-04-26 12:20:58.602400] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.303 [2024-04-26 12:20:58.602409] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.303 [2024-04-26 12:20:58.602418] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.303 [2024-04-26 12:20:58.602427] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.303 [2024-04-26 12:20:58.602436] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.303 [2024-04-26 12:20:58.602444] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.303 [2024-04-26 12:20:58.602453] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.303 [2024-04-26 12:20:58.602462] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.303 [2024-04-26 12:20:58.602470] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.303 [2024-04-26 12:20:58.602479] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.303 [2024-04-26 12:20:58.602487] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.303 [2024-04-26 
12:20:58.602496] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.303 [2024-04-26 12:20:58.603269] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.304 [2024-04-26 12:20:58.603278] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.304 [2024-04-26 12:20:58.603286] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.304 [2024-04-26 12:20:58.603297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.304 [2024-04-26 12:20:58.603315] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.304 [2024-04-26 12:20:58.603329] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.304 [2024-04-26 12:20:58.603338] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.304 [2024-04-26 12:20:58.603346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.304 [2024-04-26 12:20:58.603354] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13740c0 is same with the state(5) to be set 00:24:05.304 12:20:58 -- host/failover.sh@45 -- # sleep 3 00:24:08.587 12:21:01 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:08.587 00:24:08.587 12:21:02 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:08.845 12:21:02 -- host/failover.sh@50 -- # sleep 3 00:24:12.127 12:21:05 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:12.127 [2024-04-26 12:21:05.558621] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.127 12:21:05 -- host/failover.sh@55 -- # sleep 1 00:24:13.501 12:21:06 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:13.501 [2024-04-26 12:21:06.799521] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151ab50 is same with the state(5) to be set 00:24:13.501 [2024-04-26 12:21:06.799770] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151ab50 is same with the state(5) to be set 00:24:13.501 [2024-04-26 12:21:06.799787] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151ab50 is same with the state(5) to be set 00:24:13.501 [2024-04-26 12:21:06.799799] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151ab50 is same with the state(5) to be set 00:24:13.501 [2024-04-26 12:21:06.799809] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151ab50 is same with the state(5) to be set 00:24:13.501 [2024-04-26 12:21:06.799818] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151ab50 is same with the state(5) to be set 00:24:13.501 [2024-04-26 12:21:06.799827] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x151ab50 is same with the state(5) to be set 00:24:13.501 [2024-04-26 12:21:06.799836] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151ab50 is same with the state(5) to be set 00:24:13.501 [2024-04-26 12:21:06.799845] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151ab50 is same with the state(5) to be set 00:24:13.501 [2024-04-26 12:21:06.799855] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151ab50 is same with the state(5) to be set 00:24:13.501 [2024-04-26 12:21:06.799863] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151ab50 is same with the state(5) to be set 00:24:13.501 [2024-04-26 12:21:06.799872] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151ab50 is same with the state(5) to be set 00:24:13.501 [2024-04-26 12:21:06.799882] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151ab50 is same with the state(5) to be set 00:24:13.501 [2024-04-26 12:21:06.799890] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151ab50 is same with the state(5) to be set 00:24:13.501 [2024-04-26 12:21:06.799899] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151ab50 is same with the state(5) to be set 00:24:13.501 12:21:06 -- host/failover.sh@59 -- # wait 72754 00:24:20.064 0 00:24:20.064 12:21:12 -- host/failover.sh@61 -- # killprocess 72736 00:24:20.064 12:21:12 -- common/autotest_common.sh@936 -- # '[' -z 72736 ']' 00:24:20.064 12:21:12 -- common/autotest_common.sh@940 -- # kill -0 72736 00:24:20.064 12:21:12 -- common/autotest_common.sh@941 -- # uname 00:24:20.064 12:21:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:20.064 12:21:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72736 00:24:20.064 12:21:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:20.064 12:21:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:20.064 12:21:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72736' 00:24:20.064 killing process with pid 72736 00:24:20.064 12:21:12 -- common/autotest_common.sh@955 -- # kill 72736 00:24:20.064 12:21:12 -- common/autotest_common.sh@960 -- # wait 72736 00:24:20.064 12:21:12 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:20.064 [2024-04-26 12:20:55.805541] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:24:20.064 [2024-04-26 12:20:55.805667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72736 ] 00:24:20.064 [2024-04-26 12:20:55.946726] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.064 [2024-04-26 12:20:56.068120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.064 Running I/O for 15 seconds... 
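The bursts of qpair state messages above, and the ABORTED - SQ DELETION completions dumped below from try.txt, accompany the listener shuffle that host/failover.sh drives while bdevperf runs its 15-second verify workload: NVMe0 starts with paths on ports 4420 and 4421, the active listener is torn down, a fresh path is attached on 4422, and the remaining listeners are cycled until the original port is back. Condensed from the trace ($rpc as above, $brpc aimed at bdevperf's socket):

  brpc="$rpc -s /var/tmp/bdevperf.sock"
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 3
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421; sleep 3
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  wait $run_test_pid   # returns once the 15 s verify run completes (pid 72754 in the trace)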
00:24:20.064 [2024-04-26 12:20:58.603413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.603459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.603488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.603504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.603520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.603534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.603550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.603564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.603579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.603593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.603609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.603622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.603638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.603652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.603668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.603681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.603697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.603710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.603726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.603740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.603755] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.603768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.603810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.603826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.603841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.603855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.603871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.603884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.603900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.603914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.603929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.603943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.603958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.603981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.603997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.604012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.604027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.604042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.604057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.604071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.604086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.604100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.604116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.604129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.064 [2024-04-26 12:20:58.604144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.064 [2024-04-26 12:20:58.604158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604406] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 
lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.604959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.604978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.605001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.605016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.605032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.605046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.605061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.605075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.605090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.605104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.605120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.605134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.605149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.605163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.605191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.605206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.605221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.605235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.605251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.605265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.605281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.605303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.605319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.605333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.605348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 
12:20:58.605362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.605378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.065 [2024-04-26 12:20:58.605399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.065 [2024-04-26 12:20:58.605416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.605982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.605998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.066 [2024-04-26 12:20:58.606582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.066 [2024-04-26 12:20:58.606598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.067 [2024-04-26 12:20:58.606611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:20.067 [2024-04-26 12:20:58.606628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.067 [2024-04-26 12:20:58.606641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.606657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.067 [2024-04-26 12:20:58.606671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.606686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.067 [2024-04-26 12:20:58.606700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.606715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.067 [2024-04-26 12:20:58.606728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.606744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.067 [2024-04-26 12:20:58.606758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.606773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.067 [2024-04-26 12:20:58.606787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.606803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.067 [2024-04-26 12:20:58.606817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.606833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.067 [2024-04-26 12:20:58.606846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.606861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.067 [2024-04-26 12:20:58.606875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.606890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.067 [2024-04-26 12:20:58.606904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.606919] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:20:58.606933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.606955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:20:58.606975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.606990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:20:58.607004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.607019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:20:58.607034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.607049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:20:58.607063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.607078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:20:58.607092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.607107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:20:58.607122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.607136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:20:58.607151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.607166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:20:58.607192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.607208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:20:58.607222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.607237] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:20:58.607252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.607267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:20:58.607282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.607298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:20:58.607326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.607342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:20:58.607363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.607380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:20:58.607394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.607409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.067 [2024-04-26 12:20:58.607423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.607448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b109e0 is same with the state(5) to be set 00:24:20.067 [2024-04-26 12:20:58.607466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.067 [2024-04-26 12:20:58.607478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.067 [2024-04-26 12:20:58.607494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62792 len:8 PRP1 0x0 PRP2 0x0 00:24:20.067 [2024-04-26 12:20:58.607507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:20:58.607583] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b109e0 was disconnected and freed. reset controller. 
00:24:20.067 [2024-04-26 12:20:58.607602] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:20.067 [2024-04-26 12:20:58.607656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:20.067 [2024-04-26 12:20:58.607677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:20.067 [2024-04-26 12:20:58.607694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:20.067 [2024-04-26 12:20:58.607707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:20.067 [2024-04-26 12:20:58.607721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:20.067 [2024-04-26 12:20:58.607734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:20.067 [2024-04-26 12:20:58.607748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:20.067 [2024-04-26 12:20:58.607761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:20.067 [2024-04-26 12:20:58.607775] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:20.067 [2024-04-26 12:20:58.611668] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:20.067 [2024-04-26 12:20:58.611712] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aaa230 (9): Bad file descriptor
00:24:20.067 [2024-04-26 12:20:58.653334] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
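(The block above is the meaningful part of this burst: the queued I/O on the first path is completed with ABORTED - SQ DELETION, bdev_nvme fails over from listener 10.0.0.2:4420 to 10.0.0.2:4421 on the same subsystem, and the controller reset succeeds. As a rough sketch of how such a two-listener failover target is typically wired up with SPDK's rpc.py — the Malloc0 backing bdev, the Nvme0 controller name, and the serial number are illustrative assumptions, not the commands this job actually ran — the setup looks roughly like:

  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # hypothetical backing bdev for the namespace
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # initiator side: attach both paths under one controller name so bdev_nvme can fail over between them
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

The exact flags used by the test's own scripts may differ; this is only meant to show why the log reports a failover between two TCP listeners for nqn.2016-06.io.spdk:cnode1.)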
00:24:20.067 [2024-04-26 12:21:02.289397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:21:02.289467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:21:02.289500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:21:02.289540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:21:02.289558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:21:02.289573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:21:02.289588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.067 [2024-04-26 12:21:02.289602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.067 [2024-04-26 12:21:02.289617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.068 [2024-04-26 12:21:02.289631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.289647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.068 [2024-04-26 12:21:02.289661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.289677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.068 [2024-04-26 12:21:02.289691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.289706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.068 [2024-04-26 12:21:02.289721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.289736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.068 [2024-04-26 12:21:02.289751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.289766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.068 [2024-04-26 12:21:02.289781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.289796] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.068 [2024-04-26 12:21:02.289810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.289826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.068 [2024-04-26 12:21:02.289840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.289866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.068 [2024-04-26 12:21:02.289880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.289896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.068 [2024-04-26 12:21:02.289909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.289932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.068 [2024-04-26 12:21:02.289947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.289962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.068 [2024-04-26 12:21:02.289976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.289991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.068 [2024-04-26 12:21:02.290005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.290022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.068 [2024-04-26 12:21:02.290037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.290054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.068 [2024-04-26 12:21:02.290067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.290083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.068 [2024-04-26 12:21:02.290097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.290113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.068 [2024-04-26 12:21:02.290127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.290142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.068 [2024-04-26 12:21:02.290156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.290184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.068 [2024-04-26 12:21:02.290202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.290217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.068 [2024-04-26 12:21:02.290231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.290247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.068 [2024-04-26 12:21:02.290261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.290277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.068 [2024-04-26 12:21:02.290291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.290306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.068 [2024-04-26 12:21:02.290328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.290345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.068 [2024-04-26 12:21:02.290360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.290375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.068 [2024-04-26 12:21:02.290399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.290415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.068 [2024-04-26 12:21:02.290429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.290445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:92 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.068 [2024-04-26 12:21:02.290459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.290474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.068 [2024-04-26 12:21:02.290488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.290503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.068 [2024-04-26 12:21:02.290517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.290533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.068 [2024-04-26 12:21:02.290547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.068 [2024-04-26 12:21:02.290564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.068 [2024-04-26 12:21:02.290578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.290594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.290608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.290623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.290638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.290653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.290667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.290682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.290697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.290712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.290732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.290749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75888 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-04-26 12:21:02.290763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.290779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-04-26 12:21:02.290793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.290808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-04-26 12:21:02.290822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.290838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-04-26 12:21:02.290852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.290868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-04-26 12:21:02.290881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.290897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-04-26 12:21:02.290911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.290937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-04-26 12:21:02.290951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.290967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-04-26 12:21:02.290980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.290996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-04-26 12:21:02.291010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-04-26 12:21:02.291040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:20.069 [2024-04-26 12:21:02.291070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-04-26 12:21:02.291100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-04-26 12:21:02.291136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-04-26 12:21:02.291166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-04-26 12:21:02.291210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-04-26 12:21:02.291239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.291269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.291299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.291343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.291373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.291403] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.291433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.291462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.291492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.291542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.291573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.291604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.291634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.291664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.291694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.291723] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.069 [2024-04-26 12:21:02.291753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-04-26 12:21:02.291783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-04-26 12:21:02.291812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.069 [2024-04-26 12:21:02.291827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-04-26 12:21:02.291842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.291857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.291871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.291886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.291900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.291922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.291937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.291952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.291966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.291981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.291995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.292024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.292054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.292084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.292113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.292142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.292183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.292215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.292245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.292275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.292304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.292342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:20.070 [2024-04-26 12:21:02.292358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.292372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.292401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.292431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.292461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.292490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.292520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.292549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.292581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.292610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.292640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292656] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.292670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.292706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.292737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.292771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.292800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.292830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.292859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.292889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.292918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.292948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-04-26 12:21:02.292977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.292993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.293007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.293022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.293036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-04-26 12:21:02.293052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.070 [2024-04-26 12:21:02.293066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:02.293087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.071 [2024-04-26 12:21:02.293102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:02.293118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.071 [2024-04-26 12:21:02.293133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:02.293148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.071 [2024-04-26 12:21:02.293163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:02.293192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.071 [2024-04-26 12:21:02.293208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:02.293223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.071 [2024-04-26 12:21:02.293237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:02.293253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:02.293267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:02.293282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:72 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:02.293296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:02.293311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:02.293325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:02.293341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:02.293366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:02.293382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:02.293396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:02.293411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:02.293425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:02.293440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:02.293454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:02.293469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af7100 is same with the state(5) to be set 00:24:20.071 [2024-04-26 12:21:02.293493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.071 [2024-04-26 12:21:02.293505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.071 [2024-04-26 12:21:02.293516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76264 len:8 PRP1 0x0 PRP2 0x0 00:24:20.071 [2024-04-26 12:21:02.293529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:02.293589] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1af7100 was disconnected and freed. reset controller. 
00:24:20.071 [2024-04-26 12:21:02.293608] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:20.071 [2024-04-26 12:21:02.293663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.071 [2024-04-26 12:21:02.293684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:02.293699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.071 [2024-04-26 12:21:02.293713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:02.293728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.071 [2024-04-26 12:21:02.293741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:02.293755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.071 [2024-04-26 12:21:02.293768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:02.293782] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:20.071 [2024-04-26 12:21:02.297602] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:20.071 [2024-04-26 12:21:02.297644] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aaa230 (9): Bad file descriptor 00:24:20.071 [2024-04-26 12:21:02.329503] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
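The sequence above is the normal bdev_nvme failover path: outstanding I/O on the active queue pair completes with ABORTED - SQ DELETION (00/08) once the target tears down the submission queue, the TCP qpair is disconnected and freed, bdev_nvme_failover_trid switches the transport ID from 10.0.0.2:4421 to 10.0.0.2:4422, and the controller reset completes successfully. A minimal reproduction sketch follows, assuming a running SPDK nvmf target with a Malloc0 bdev and the standard scripts/rpc.py client; the listener addresses mirror the log, but the exact commands, paths, and ordering are assumptions and not the commands this job actually ran:
# Hypothetical reproduction sketch (not taken from this job): one subsystem, two TCP
# listeners, then drop the active listener so the host-side bdev_nvme path fails over.
rpc=scripts/rpc.py    # assumed location of SPDK's RPC client
$rpc nvmf_create_transport -t tcp
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# ... connect a host-side bdev_nvme controller to 10.0.0.2:4421 and start I/O ...
# Removing the active listener deletes its submission queues (the ABORTED - SQ DELETION
# completions above) and triggers bdev_nvme_failover_trid toward the 4422 listener.
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
A host-side bdev_nvme controller that also knows about the 4422 listener then resets, reconnects there, and resumes the queued I/O, which is what the "Resetting controller successful" notice above records before the next burst of aborts at 12:21:06.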
00:24:20.071 [2024-04-26 12:21:06.799971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:06.800024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:06.800067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:06.800097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:06.800126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:06.800191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:06.800226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:06.800255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:06.800283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:06.800312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:06.800341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800357] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:06.800370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:06.800399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:06.800428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:06.800457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:06.800485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.071 [2024-04-26 12:21:06.800514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.071 [2024-04-26 12:21:06.800545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.071 [2024-04-26 12:21:06.800585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.071 [2024-04-26 12:21:06.800616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.071 [2024-04-26 12:21:06.800632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.800646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.800666] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.800681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.800696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.800711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.800726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.800740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.800756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.800770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.800786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.072 [2024-04-26 12:21:06.800800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.800815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.072 [2024-04-26 12:21:06.800829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.800845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.072 [2024-04-26 12:21:06.800859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.800874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.072 [2024-04-26 12:21:06.800889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.800904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.072 [2024-04-26 12:21:06.800918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.800934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.072 [2024-04-26 12:21:06.800948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.800970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14408 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.072 [2024-04-26 12:21:06.800985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.072 [2024-04-26 12:21:06.801015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.801045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.801075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.801104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.801134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.801165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.801213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.801243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.801273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:20.072 [2024-04-26 12:21:06.801303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.801333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.801370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.801402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.801432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.801462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.801492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.801522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.072 [2024-04-26 12:21:06.801553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.072 [2024-04-26 12:21:06.801583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.072 [2024-04-26 12:21:06.801613] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.072 [2024-04-26 12:21:06.801642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.072 [2024-04-26 12:21:06.801673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.072 [2024-04-26 12:21:06.801702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.072 [2024-04-26 12:21:06.801732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.072 [2024-04-26 12:21:06.801769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.801800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.801829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.072 [2024-04-26 12:21:06.801859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.072 [2024-04-26 12:21:06.801875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.073 [2024-04-26 12:21:06.801889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.801904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.073 [2024-04-26 12:21:06.801919] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.801934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.073 [2024-04-26 12:21:06.801948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.801964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.073 [2024-04-26 12:21:06.801978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.801993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.073 [2024-04-26 12:21:06.802008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.073 [2024-04-26 12:21:06.802038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.073 [2024-04-26 12:21:06.802068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.073 [2024-04-26 12:21:06.802098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.073 [2024-04-26 12:21:06.802127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.073 [2024-04-26 12:21:06.802165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.073 [2024-04-26 12:21:06.802209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.073 [2024-04-26 12:21:06.802238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.073 [2024-04-26 12:21:06.802268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.073 [2024-04-26 12:21:06.802298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.073 [2024-04-26 12:21:06.802327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.073 [2024-04-26 12:21:06.802356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.073 [2024-04-26 12:21:06.802386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.073 [2024-04-26 12:21:06.802415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.073 [2024-04-26 12:21:06.802445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.073 [2024-04-26 12:21:06.802474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.073 [2024-04-26 12:21:06.802504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.073 [2024-04-26 12:21:06.802541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 
[2024-04-26 12:21:06.802557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.073 [2024-04-26 12:21:06.802573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.073 [2024-04-26 12:21:06.802602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.073 [2024-04-26 12:21:06.802632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.073 [2024-04-26 12:21:06.802670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.073 [2024-04-26 12:21:06.802699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.073 [2024-04-26 12:21:06.802729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.073 [2024-04-26 12:21:06.802759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.073 [2024-04-26 12:21:06.802788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.073 [2024-04-26 12:21:06.802818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.073 [2024-04-26 12:21:06.802848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.073 [2024-04-26 12:21:06.802863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.073 [2024-04-26 12:21:06.802877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.802893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.074 [2024-04-26 12:21:06.802907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.802929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.074 [2024-04-26 12:21:06.802943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.802959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.074 [2024-04-26 12:21:06.802973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.802988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.074 [2024-04-26 12:21:06.803003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:125 nsid:1 lba:14656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.074 [2024-04-26 12:21:06.803287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.074 [2024-04-26 12:21:06.803336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.074 [2024-04-26 12:21:06.803367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.074 [2024-04-26 12:21:06.803397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.074 [2024-04-26 12:21:06.803427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.074 [2024-04-26 12:21:06.803457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.074 [2024-04-26 12:21:06.803486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15248 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:20.074 [2024-04-26 12:21:06.803516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 
[2024-04-26 12:21:06.803834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.074 [2024-04-26 12:21:06.803980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.803995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af7290 is same with the state(5) to be set 00:24:20.074 [2024-04-26 12:21:06.804013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:20.074 [2024-04-26 12:21:06.804023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:20.074 [2024-04-26 12:21:06.804034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14800 len:8 PRP1 0x0 PRP2 0x0 00:24:20.074 [2024-04-26 12:21:06.804048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.804112] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1af7290 was disconnected and freed. reset controller. 
00:24:20.074 [2024-04-26 12:21:06.804130] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:20.074 [2024-04-26 12:21:06.804198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.074 [2024-04-26 12:21:06.804221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.074 [2024-04-26 12:21:06.804247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.075 [2024-04-26 12:21:06.804266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.075 [2024-04-26 12:21:06.804281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.075 [2024-04-26 12:21:06.804295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.075 [2024-04-26 12:21:06.804309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.075 [2024-04-26 12:21:06.804322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.075 [2024-04-26 12:21:06.804336] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:20.075 [2024-04-26 12:21:06.804385] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aaa230 (9): Bad file descriptor 00:24:20.075 [2024-04-26 12:21:06.808160] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:20.075 [2024-04-26 12:21:06.847266] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
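The abort storm and controller reset above are the expected signature of a bdev_nvme path failover: the active TCP qpair is torn down, every queued READ/WRITE is completed as ABORTED - SQ DELETION, and the bdev layer reconnects on the next registered path (here 10.0.0.2:4420). A minimal sketch of how the same transition can be forced by hand, reusing only the rpc.py calls that failover.sh itself issues further down in this log (the bdevperf RPC socket /var/tmp/bdevperf.sock and the cnode1 NQN are taken from this run, not general defaults):

    # expose the subsystem on two extra ports so three paths exist
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # attach the same bdev name once per path; the extra trids become failover targets
    for port in 4420 4421 4422; do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done

    # removing the active path is what produces the "Start failover from ... to ..." notice seen above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller \
      NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1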
00:24:20.075 00:24:20.075 Latency(us) 00:24:20.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.075 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:20.075 Verification LBA range: start 0x0 length 0x4000 00:24:20.075 NVMe0n1 : 15.01 8817.80 34.44 233.24 0.00 14110.20 659.08 18230.92 00:24:20.075 =================================================================================================================== 00:24:20.075 Total : 8817.80 34.44 233.24 0.00 14110.20 659.08 18230.92 00:24:20.075 Received shutdown signal, test time was about 15.000000 seconds 00:24:20.075 00:24:20.075 Latency(us) 00:24:20.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.075 =================================================================================================================== 00:24:20.075 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.075 12:21:12 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:20.075 12:21:12 -- host/failover.sh@65 -- # count=3 00:24:20.075 12:21:12 -- host/failover.sh@67 -- # (( count != 3 )) 00:24:20.075 12:21:12 -- host/failover.sh@73 -- # bdevperf_pid=72932 00:24:20.075 12:21:12 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:20.075 12:21:12 -- host/failover.sh@75 -- # waitforlisten 72932 /var/tmp/bdevperf.sock 00:24:20.075 12:21:12 -- common/autotest_common.sh@817 -- # '[' -z 72932 ']' 00:24:20.075 12:21:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:20.075 12:21:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:20.075 12:21:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:20.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
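After the 15-second verify run summarized above, failover.sh launches a second bdevperf in idle mode (-z) and drives it entirely over its own RPC socket: the paths under test are attached with rpc.py -s /var/tmp/bdevperf.sock, and I/O only starts once bdevperf.py perform_tests is sent (both steps are traced below). Condensed into one place, the pattern the trace follows is roughly:

    # start bdevperf idle; -z makes it wait for RPC configuration on its own socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &

    # configure the bdev(s) under test over the bdevperf socket once it is listening
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # only now does any I/O run; the latency table lands in the bdevperf process output
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests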
00:24:20.075 12:21:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:20.075 12:21:12 -- common/autotest_common.sh@10 -- # set +x 00:24:20.642 12:21:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:20.642 12:21:13 -- common/autotest_common.sh@850 -- # return 0 00:24:20.642 12:21:13 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:20.642 [2024-04-26 12:21:14.020875] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:20.642 12:21:14 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:20.924 [2024-04-26 12:21:14.321130] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:20.924 12:21:14 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:21.183 NVMe0n1 00:24:21.183 12:21:14 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:21.750 00:24:21.750 12:21:14 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:22.009 00:24:22.009 12:21:15 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:22.009 12:21:15 -- host/failover.sh@82 -- # grep -q NVMe0 00:24:22.268 12:21:15 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:22.268 12:21:15 -- host/failover.sh@87 -- # sleep 3 00:24:25.554 12:21:18 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:25.554 12:21:18 -- host/failover.sh@88 -- # grep -q NVMe0 00:24:25.554 12:21:18 -- host/failover.sh@90 -- # run_test_pid=73014 00:24:25.554 12:21:18 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:25.554 12:21:18 -- host/failover.sh@92 -- # wait 73014 00:24:26.943 0 00:24:26.943 12:21:20 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:26.943 [2024-04-26 12:21:12.823591] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:24:26.943 [2024-04-26 12:21:12.824449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72932 ] 00:24:26.943 [2024-04-26 12:21:12.959092] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.943 [2024-04-26 12:21:13.068033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.943 [2024-04-26 12:21:15.712903] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:26.943 [2024-04-26 12:21:15.713489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.943 [2024-04-26 12:21:15.713603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.943 [2024-04-26 12:21:15.713694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.943 [2024-04-26 12:21:15.713719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.943 [2024-04-26 12:21:15.713743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.943 [2024-04-26 12:21:15.713758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.943 [2024-04-26 12:21:15.713772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.943 [2024-04-26 12:21:15.713786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.943 [2024-04-26 12:21:15.713801] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.943 [2024-04-26 12:21:15.713859] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.943 [2024-04-26 12:21:15.713892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e0230 (9): Bad file descriptor 00:24:26.943 [2024-04-26 12:21:15.722038] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:26.943 Running I/O for 1 seconds... 
00:24:26.943 00:24:26.943 Latency(us) 00:24:26.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.943 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:26.943 Verification LBA range: start 0x0 length 0x4000 00:24:26.943 NVMe0n1 : 1.01 6860.57 26.80 0.00 0.00 18583.16 2278.87 15728.64 00:24:26.943 =================================================================================================================== 00:24:26.943 Total : 6860.57 26.80 0.00 0.00 18583.16 2278.87 15728.64 00:24:26.943 12:21:20 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:26.943 12:21:20 -- host/failover.sh@95 -- # grep -q NVMe0 00:24:26.943 12:21:20 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:27.202 12:21:20 -- host/failover.sh@99 -- # grep -q NVMe0 00:24:27.202 12:21:20 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:27.461 12:21:20 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:27.720 12:21:21 -- host/failover.sh@101 -- # sleep 3 00:24:31.002 12:21:24 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:31.002 12:21:24 -- host/failover.sh@103 -- # grep -q NVMe0 00:24:31.002 12:21:24 -- host/failover.sh@108 -- # killprocess 72932 00:24:31.002 12:21:24 -- common/autotest_common.sh@936 -- # '[' -z 72932 ']' 00:24:31.002 12:21:24 -- common/autotest_common.sh@940 -- # kill -0 72932 00:24:31.002 12:21:24 -- common/autotest_common.sh@941 -- # uname 00:24:31.002 12:21:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:31.002 12:21:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72932 00:24:31.002 12:21:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:31.002 12:21:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:31.002 killing process with pid 72932 00:24:31.002 12:21:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72932' 00:24:31.002 12:21:24 -- common/autotest_common.sh@955 -- # kill 72932 00:24:31.002 12:21:24 -- common/autotest_common.sh@960 -- # wait 72932 00:24:31.258 12:21:24 -- host/failover.sh@110 -- # sync 00:24:31.516 12:21:24 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:31.516 12:21:24 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:31.516 12:21:24 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:31.516 12:21:24 -- host/failover.sh@116 -- # nvmftestfini 00:24:31.516 12:21:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:31.516 12:21:24 -- nvmf/common.sh@117 -- # sync 00:24:31.516 12:21:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:31.516 12:21:24 -- nvmf/common.sh@120 -- # set +e 00:24:31.516 12:21:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:31.516 12:21:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:31.516 rmmod nvme_tcp 00:24:31.774 rmmod nvme_fabrics 00:24:31.774 rmmod nvme_keyring 00:24:31.774 12:21:25 -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:24:31.774 12:21:25 -- nvmf/common.sh@124 -- # set -e 00:24:31.774 12:21:25 -- nvmf/common.sh@125 -- # return 0 00:24:31.774 12:21:25 -- nvmf/common.sh@478 -- # '[' -n 72673 ']' 00:24:31.774 12:21:25 -- nvmf/common.sh@479 -- # killprocess 72673 00:24:31.774 12:21:25 -- common/autotest_common.sh@936 -- # '[' -z 72673 ']' 00:24:31.774 12:21:25 -- common/autotest_common.sh@940 -- # kill -0 72673 00:24:31.774 12:21:25 -- common/autotest_common.sh@941 -- # uname 00:24:31.774 12:21:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:31.774 12:21:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72673 00:24:31.774 12:21:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:31.774 12:21:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:31.774 12:21:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72673' 00:24:31.774 killing process with pid 72673 00:24:31.774 12:21:25 -- common/autotest_common.sh@955 -- # kill 72673 00:24:31.774 12:21:25 -- common/autotest_common.sh@960 -- # wait 72673 00:24:32.032 12:21:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:32.032 12:21:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:32.032 12:21:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:32.032 12:21:25 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:32.032 12:21:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:32.032 12:21:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.032 12:21:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.032 12:21:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.032 12:21:25 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:32.032 ************************************ 00:24:32.032 END TEST nvmf_failover 00:24:32.032 ************************************ 00:24:32.032 00:24:32.032 real 0m32.851s 00:24:32.032 user 2m7.295s 00:24:32.032 sys 0m5.652s 00:24:32.032 12:21:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:32.032 12:21:25 -- common/autotest_common.sh@10 -- # set +x 00:24:32.032 12:21:25 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:32.032 12:21:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:32.032 12:21:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:32.032 12:21:25 -- common/autotest_common.sh@10 -- # set +x 00:24:32.032 ************************************ 00:24:32.032 START TEST nvmf_discovery 00:24:32.032 ************************************ 00:24:32.032 12:21:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:32.291 * Looking for test storage... 
00:24:32.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:32.291 12:21:25 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:32.291 12:21:25 -- nvmf/common.sh@7 -- # uname -s 00:24:32.291 12:21:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.291 12:21:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.291 12:21:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.291 12:21:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.291 12:21:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.291 12:21:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.291 12:21:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.291 12:21:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.291 12:21:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.291 12:21:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.291 12:21:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:24:32.291 12:21:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:24:32.291 12:21:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.291 12:21:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.291 12:21:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:32.291 12:21:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.291 12:21:25 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:32.291 12:21:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.291 12:21:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.291 12:21:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.291 12:21:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.291 12:21:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.291 12:21:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.291 12:21:25 -- paths/export.sh@5 -- # export PATH 00:24:32.291 12:21:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.291 12:21:25 -- nvmf/common.sh@47 -- # : 0 00:24:32.291 12:21:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:32.291 12:21:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:32.291 12:21:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.291 12:21:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.291 12:21:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.291 12:21:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:32.291 12:21:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:32.291 12:21:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:32.291 12:21:25 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:32.291 12:21:25 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:32.291 12:21:25 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:32.291 12:21:25 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:32.291 12:21:25 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:32.291 12:21:25 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:32.291 12:21:25 -- host/discovery.sh@25 -- # nvmftestinit 00:24:32.291 12:21:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:32.291 12:21:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.291 12:21:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:32.291 12:21:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:32.291 12:21:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:32.291 12:21:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.291 12:21:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.291 12:21:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.291 12:21:25 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:32.291 12:21:25 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:32.291 12:21:25 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:32.291 12:21:25 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:32.291 12:21:25 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:32.291 12:21:25 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:32.291 12:21:25 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.291 12:21:25 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.291 12:21:25 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:32.291 12:21:25 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:32.291 12:21:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:32.291 12:21:25 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:32.291 12:21:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:32.291 12:21:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.291 12:21:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:32.291 12:21:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:32.291 12:21:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:32.291 12:21:25 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:32.291 12:21:25 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:32.291 12:21:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:32.291 Cannot find device "nvmf_tgt_br" 00:24:32.291 12:21:25 -- nvmf/common.sh@155 -- # true 00:24:32.291 12:21:25 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:32.291 Cannot find device "nvmf_tgt_br2" 00:24:32.291 12:21:25 -- nvmf/common.sh@156 -- # true 00:24:32.291 12:21:25 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:32.291 12:21:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:32.291 Cannot find device "nvmf_tgt_br" 00:24:32.291 12:21:25 -- nvmf/common.sh@158 -- # true 00:24:32.291 12:21:25 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:32.291 Cannot find device "nvmf_tgt_br2" 00:24:32.291 12:21:25 -- nvmf/common.sh@159 -- # true 00:24:32.291 12:21:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:32.291 12:21:25 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:32.291 12:21:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:32.291 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:32.291 12:21:25 -- nvmf/common.sh@162 -- # true 00:24:32.291 12:21:25 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:32.291 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:32.291 12:21:25 -- nvmf/common.sh@163 -- # true 00:24:32.291 12:21:25 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:32.291 12:21:25 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:32.291 12:21:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:32.549 12:21:25 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:32.549 12:21:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:32.549 12:21:25 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:32.549 12:21:25 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:32.549 12:21:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:32.549 12:21:25 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:32.549 12:21:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:32.549 12:21:25 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:32.549 12:21:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:32.549 12:21:25 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:32.549 12:21:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:32.549 12:21:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:32.549 12:21:25 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:32.549 12:21:25 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:32.549 12:21:25 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:32.549 12:21:25 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:32.549 12:21:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:32.549 12:21:25 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:32.549 12:21:25 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:32.549 12:21:25 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:32.549 12:21:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:32.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:24:32.549 00:24:32.549 --- 10.0.0.2 ping statistics --- 00:24:32.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.549 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:24:32.549 12:21:25 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:32.549 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:32.549 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:24:32.549 00:24:32.549 --- 10.0.0.3 ping statistics --- 00:24:32.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.549 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:24:32.549 12:21:25 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:32.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:32.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:24:32.549 00:24:32.549 --- 10.0.0.1 ping statistics --- 00:24:32.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.549 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:24:32.549 12:21:25 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.549 12:21:25 -- nvmf/common.sh@422 -- # return 0 00:24:32.549 12:21:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:32.549 12:21:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.549 12:21:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:32.549 12:21:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:32.549 12:21:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.549 12:21:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:32.549 12:21:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:32.549 12:21:25 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:32.549 12:21:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:32.549 12:21:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:32.549 12:21:25 -- common/autotest_common.sh@10 -- # set +x 00:24:32.549 12:21:25 -- nvmf/common.sh@470 -- # nvmfpid=73282 00:24:32.549 12:21:25 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:32.549 12:21:25 -- nvmf/common.sh@471 -- # waitforlisten 73282 00:24:32.549 12:21:25 -- common/autotest_common.sh@817 -- # '[' -z 73282 ']' 00:24:32.549 12:21:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.549 12:21:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:32.549 12:21:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.549 12:21:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:32.549 12:21:25 -- common/autotest_common.sh@10 -- # set +x 00:24:32.549 [2024-04-26 12:21:26.016376] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:24:32.549 [2024-04-26 12:21:26.016485] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.807 [2024-04-26 12:21:26.149551] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.065 [2024-04-26 12:21:26.282796] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.065 [2024-04-26 12:21:26.282871] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.065 [2024-04-26 12:21:26.282897] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.065 [2024-04-26 12:21:26.282908] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.065 [2024-04-26 12:21:26.282917] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
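The ping replies above confirm the virtual topology that nvmftestinit builds for NET_TYPE=virt: the target addresses (10.0.0.2, 10.0.0.3) live inside the nvmf_tgt_ns_spdk network namespace, the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, and the veth halves are joined by the nvmf_br bridge. A trimmed sketch of that setup, condensed from the ip/iptables commands traced above (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is configured the same way and omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator half stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target half moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge                              # bridge the two veth peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                           # initiator -> target across the bridge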
00:24:33.065 [2024-04-26 12:21:26.282963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.632 12:21:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:33.632 12:21:26 -- common/autotest_common.sh@850 -- # return 0 00:24:33.632 12:21:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:33.632 12:21:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:33.632 12:21:26 -- common/autotest_common.sh@10 -- # set +x 00:24:33.632 12:21:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.632 12:21:27 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:33.632 12:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.632 12:21:27 -- common/autotest_common.sh@10 -- # set +x 00:24:33.632 [2024-04-26 12:21:27.029719] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.632 12:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.632 12:21:27 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:33.632 12:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.632 12:21:27 -- common/autotest_common.sh@10 -- # set +x 00:24:33.632 [2024-04-26 12:21:27.037814] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:33.632 12:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.632 12:21:27 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:33.632 12:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.632 12:21:27 -- common/autotest_common.sh@10 -- # set +x 00:24:33.632 null0 00:24:33.632 12:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.632 12:21:27 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:33.632 12:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.632 12:21:27 -- common/autotest_common.sh@10 -- # set +x 00:24:33.632 null1 00:24:33.632 12:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.632 12:21:27 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:33.632 12:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.632 12:21:27 -- common/autotest_common.sh@10 -- # set +x 00:24:33.632 12:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.632 12:21:27 -- host/discovery.sh@45 -- # hostpid=73314 00:24:33.632 12:21:27 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:33.632 12:21:27 -- host/discovery.sh@46 -- # waitforlisten 73314 /tmp/host.sock 00:24:33.632 12:21:27 -- common/autotest_common.sh@817 -- # '[' -z 73314 ']' 00:24:33.632 12:21:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:24:33.632 12:21:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:33.632 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:33.632 12:21:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:33.632 12:21:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:33.632 12:21:27 -- common/autotest_common.sh@10 -- # set +x 00:24:33.889 [2024-04-26 12:21:27.122730] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:24:33.889 [2024-04-26 12:21:27.122848] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73314 ] 00:24:33.889 [2024-04-26 12:21:27.265920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.147 [2024-04-26 12:21:27.392779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.712 12:21:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:34.712 12:21:28 -- common/autotest_common.sh@850 -- # return 0 00:24:34.712 12:21:28 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:34.712 12:21:28 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:34.712 12:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.712 12:21:28 -- common/autotest_common.sh@10 -- # set +x 00:24:34.712 12:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.712 12:21:28 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:34.712 12:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.712 12:21:28 -- common/autotest_common.sh@10 -- # set +x 00:24:34.712 12:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.712 12:21:28 -- host/discovery.sh@72 -- # notify_id=0 00:24:34.712 12:21:28 -- host/discovery.sh@83 -- # get_subsystem_names 00:24:34.712 12:21:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:34.712 12:21:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:34.712 12:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.712 12:21:28 -- common/autotest_common.sh@10 -- # set +x 00:24:34.712 12:21:28 -- host/discovery.sh@59 -- # xargs 00:24:34.712 12:21:28 -- host/discovery.sh@59 -- # sort 00:24:34.712 12:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.712 12:21:28 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:34.712 12:21:28 -- host/discovery.sh@84 -- # get_bdev_list 00:24:34.712 12:21:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.712 12:21:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:34.712 12:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.712 12:21:28 -- host/discovery.sh@55 -- # sort 00:24:34.712 12:21:28 -- host/discovery.sh@55 -- # xargs 00:24:34.712 12:21:28 -- common/autotest_common.sh@10 -- # set +x 00:24:34.969 12:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.969 12:21:28 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:34.969 12:21:28 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:34.969 12:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.970 12:21:28 -- common/autotest_common.sh@10 -- # set +x 00:24:34.970 12:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.970 12:21:28 -- host/discovery.sh@87 -- # get_subsystem_names 00:24:34.970 12:21:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:34.970 12:21:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:34.970 12:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.970 12:21:28 -- host/discovery.sh@59 -- # sort 00:24:34.970 12:21:28 -- host/discovery.sh@59 -- # xargs 
00:24:34.970 12:21:28 -- common/autotest_common.sh@10 -- # set +x 00:24:34.970 12:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.970 12:21:28 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:34.970 12:21:28 -- host/discovery.sh@88 -- # get_bdev_list 00:24:34.970 12:21:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.970 12:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.970 12:21:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:34.970 12:21:28 -- common/autotest_common.sh@10 -- # set +x 00:24:34.970 12:21:28 -- host/discovery.sh@55 -- # sort 00:24:34.970 12:21:28 -- host/discovery.sh@55 -- # xargs 00:24:34.970 12:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.970 12:21:28 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:34.970 12:21:28 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:34.970 12:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.970 12:21:28 -- common/autotest_common.sh@10 -- # set +x 00:24:34.970 12:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.970 12:21:28 -- host/discovery.sh@91 -- # get_subsystem_names 00:24:34.970 12:21:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:34.970 12:21:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:34.970 12:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.970 12:21:28 -- host/discovery.sh@59 -- # xargs 00:24:34.970 12:21:28 -- common/autotest_common.sh@10 -- # set +x 00:24:34.970 12:21:28 -- host/discovery.sh@59 -- # sort 00:24:34.970 12:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.970 12:21:28 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:34.970 12:21:28 -- host/discovery.sh@92 -- # get_bdev_list 00:24:34.970 12:21:28 -- host/discovery.sh@55 -- # sort 00:24:34.970 12:21:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:34.970 12:21:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.970 12:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.970 12:21:28 -- host/discovery.sh@55 -- # xargs 00:24:34.970 12:21:28 -- common/autotest_common.sh@10 -- # set +x 00:24:34.970 12:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.227 12:21:28 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:35.227 12:21:28 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:35.227 12:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.227 12:21:28 -- common/autotest_common.sh@10 -- # set +x 00:24:35.227 [2024-04-26 12:21:28.478285] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.227 12:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.227 12:21:28 -- host/discovery.sh@97 -- # get_subsystem_names 00:24:35.227 12:21:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:35.227 12:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.227 12:21:28 -- common/autotest_common.sh@10 -- # set +x 00:24:35.227 12:21:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:35.227 12:21:28 -- host/discovery.sh@59 -- # sort 00:24:35.227 12:21:28 -- host/discovery.sh@59 -- # xargs 00:24:35.227 12:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.227 12:21:28 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:35.227 12:21:28 -- 
host/discovery.sh@98 -- # get_bdev_list 00:24:35.227 12:21:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:35.227 12:21:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.227 12:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.227 12:21:28 -- common/autotest_common.sh@10 -- # set +x 00:24:35.227 12:21:28 -- host/discovery.sh@55 -- # sort 00:24:35.227 12:21:28 -- host/discovery.sh@55 -- # xargs 00:24:35.227 12:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.227 12:21:28 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:35.227 12:21:28 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:35.227 12:21:28 -- host/discovery.sh@79 -- # expected_count=0 00:24:35.227 12:21:28 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:35.227 12:21:28 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:35.227 12:21:28 -- common/autotest_common.sh@901 -- # local max=10 00:24:35.227 12:21:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:35.227 12:21:28 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:35.227 12:21:28 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:35.227 12:21:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:35.227 12:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.227 12:21:28 -- common/autotest_common.sh@10 -- # set +x 00:24:35.227 12:21:28 -- host/discovery.sh@74 -- # jq '. | length' 00:24:35.227 12:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.227 12:21:28 -- host/discovery.sh@74 -- # notification_count=0 00:24:35.227 12:21:28 -- host/discovery.sh@75 -- # notify_id=0 00:24:35.227 12:21:28 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:35.227 12:21:28 -- common/autotest_common.sh@904 -- # return 0 00:24:35.227 12:21:28 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:35.227 12:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.227 12:21:28 -- common/autotest_common.sh@10 -- # set +x 00:24:35.227 12:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.227 12:21:28 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:35.227 12:21:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:35.227 12:21:28 -- common/autotest_common.sh@901 -- # local max=10 00:24:35.227 12:21:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:35.227 12:21:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:35.227 12:21:28 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:24:35.227 12:21:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:35.227 12:21:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:35.227 12:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.227 12:21:28 -- host/discovery.sh@59 -- # sort 00:24:35.227 12:21:28 -- common/autotest_common.sh@10 -- # set +x 00:24:35.227 12:21:28 -- host/discovery.sh@59 -- # xargs 00:24:35.227 12:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.485 12:21:28 -- common/autotest_common.sh@903 -- # 
[[ '' == \n\v\m\e\0 ]] 00:24:35.485 12:21:28 -- common/autotest_common.sh@906 -- # sleep 1 00:24:35.742 [2024-04-26 12:21:29.113067] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:35.742 [2024-04-26 12:21:29.113121] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:35.742 [2024-04-26 12:21:29.113155] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:35.742 [2024-04-26 12:21:29.119121] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:35.742 [2024-04-26 12:21:29.175731] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:35.742 [2024-04-26 12:21:29.175777] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:36.310 12:21:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:36.310 12:21:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:36.310 12:21:29 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:24:36.310 12:21:29 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.310 12:21:29 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.310 12:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.310 12:21:29 -- common/autotest_common.sh@10 -- # set +x 00:24:36.310 12:21:29 -- host/discovery.sh@59 -- # sort 00:24:36.310 12:21:29 -- host/discovery.sh@59 -- # xargs 00:24:36.310 12:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.310 12:21:29 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.310 12:21:29 -- common/autotest_common.sh@904 -- # return 0 00:24:36.310 12:21:29 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:36.310 12:21:29 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:36.310 12:21:29 -- common/autotest_common.sh@901 -- # local max=10 00:24:36.310 12:21:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:36.310 12:21:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:36.310 12:21:29 -- common/autotest_common.sh@903 -- # get_bdev_list 00:24:36.310 12:21:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.310 12:21:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.310 12:21:29 -- host/discovery.sh@55 -- # sort 00:24:36.310 12:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.310 12:21:29 -- common/autotest_common.sh@10 -- # set +x 00:24:36.310 12:21:29 -- host/discovery.sh@55 -- # xargs 00:24:36.569 12:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.569 12:21:29 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:36.569 12:21:29 -- common/autotest_common.sh@904 -- # return 0 00:24:36.569 12:21:29 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:36.569 12:21:29 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:36.569 12:21:29 -- common/autotest_common.sh@901 -- # local max=10 00:24:36.569 12:21:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:36.569 12:21:29 -- 
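The polling visible above is the test's generic wait loop: the condition string is eval'ed up to ten times with a one-second sleep between attempts until it holds. A minimal sketch reconstructed from the traced behavior follows; the failure return once the retries are exhausted is an assumption, since the trace never reaches that path.

# Sketch of the waitforcondition helper as traced (autotest_common.sh@900-906).
# max=10 and the 1s sleep match the trace; the final failure return is assumed.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0   # e.g. [[ "$(get_subsystem_names)" == "nvme0" ]]
        sleep 1
    done
    return 1
}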
common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:36.569 12:21:29 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:24:36.569 12:21:29 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:36.569 12:21:29 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:36.569 12:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.569 12:21:29 -- host/discovery.sh@63 -- # sort -n 00:24:36.569 12:21:29 -- common/autotest_common.sh@10 -- # set +x 00:24:36.569 12:21:29 -- host/discovery.sh@63 -- # xargs 00:24:36.569 12:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.569 12:21:29 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:24:36.569 12:21:29 -- common/autotest_common.sh@904 -- # return 0 00:24:36.569 12:21:29 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:36.569 12:21:29 -- host/discovery.sh@79 -- # expected_count=1 00:24:36.569 12:21:29 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:36.569 12:21:29 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:36.569 12:21:29 -- common/autotest_common.sh@901 -- # local max=10 00:24:36.569 12:21:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:36.569 12:21:29 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:36.569 12:21:29 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:36.569 12:21:29 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:36.569 12:21:29 -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:36.569 12:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.569 12:21:29 -- common/autotest_common.sh@10 -- # set +x 00:24:36.569 12:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.569 12:21:29 -- host/discovery.sh@74 -- # notification_count=1 00:24:36.569 12:21:29 -- host/discovery.sh@75 -- # notify_id=1 00:24:36.569 12:21:29 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:36.569 12:21:29 -- common/autotest_common.sh@904 -- # return 0 00:24:36.569 12:21:29 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:36.569 12:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.569 12:21:29 -- common/autotest_common.sh@10 -- # set +x 00:24:36.569 12:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.569 12:21:29 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:36.569 12:21:29 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:36.569 12:21:29 -- common/autotest_common.sh@901 -- # local max=10 00:24:36.569 12:21:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:36.569 12:21:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:36.569 12:21:29 -- common/autotest_common.sh@903 -- # get_bdev_list 00:24:36.569 12:21:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.569 12:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.569 12:21:29 -- common/autotest_common.sh@10 -- # set +x 00:24:36.569 12:21:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.569 12:21:29 -- host/discovery.sh@55 -- # sort 00:24:36.569 12:21:29 -- host/discovery.sh@55 -- # xargs 00:24:36.569 12:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.569 12:21:30 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:36.569 12:21:30 -- common/autotest_common.sh@904 -- # return 0 00:24:36.569 12:21:30 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:36.569 12:21:30 -- host/discovery.sh@79 -- # expected_count=1 00:24:36.569 12:21:30 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:36.569 12:21:30 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:36.569 12:21:30 -- common/autotest_common.sh@901 -- # local max=10 00:24:36.569 12:21:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:36.569 12:21:30 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:36.569 12:21:30 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:36.569 12:21:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:36.569 12:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.569 12:21:30 -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:36.569 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:36.569 12:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.846 12:21:30 -- host/discovery.sh@74 -- # notification_count=1 00:24:36.846 12:21:30 -- host/discovery.sh@75 -- # notify_id=2 00:24:36.846 12:21:30 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:36.846 12:21:30 -- common/autotest_common.sh@904 -- # return 0 00:24:36.846 12:21:30 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:36.846 12:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.846 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:36.846 [2024-04-26 12:21:30.068187] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:36.846 [2024-04-26 12:21:30.069124] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:36.846 [2024-04-26 12:21:30.069166] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:36.846 12:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.846 12:21:30 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:36.846 12:21:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:36.846 12:21:30 -- common/autotest_common.sh@901 -- # local max=10 00:24:36.846 12:21:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:36.846 12:21:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:36.846 [2024-04-26 12:21:30.075111] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:36.846 12:21:30 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:24:36.846 12:21:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.846 12:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.846 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:36.846 12:21:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.846 12:21:30 -- host/discovery.sh@59 -- # xargs 00:24:36.846 12:21:30 -- host/discovery.sh@59 -- # sort 00:24:36.846 12:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.846 12:21:30 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.846 12:21:30 -- common/autotest_common.sh@904 -- # return 0 00:24:36.846 12:21:30 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:36.846 12:21:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:36.846 12:21:30 -- common/autotest_common.sh@901 -- # local max=10 00:24:36.846 12:21:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:36.846 12:21:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:36.846 12:21:30 -- common/autotest_common.sh@903 -- # get_bdev_list 00:24:36.846 12:21:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.846 12:21:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.846 12:21:30 -- host/discovery.sh@55 -- # xargs 00:24:36.846 12:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.846 12:21:30 -- host/discovery.sh@55 -- # sort 00:24:36.846 
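The notification bookkeeping in these checks reduces to one helper: it asks the host SPDK instance for notifications newer than the last seen id, counts them with jq, and advances the id by that count. The sketch below is a reconstruction consistent with the traced values (count 1 moves notify_id 0 to 1, count 1 moves it to 2, count 0 leaves it at 2, count 2 moves it to 4).

# Reconstructed from host/discovery.sh@74-75 as traced; variable names match the trace.
get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}
# is_notification_count_eq N then compares $notification_count to N inside waitforcondition.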
12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:36.846 [2024-04-26 12:21:30.135422] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:36.846 [2024-04-26 12:21:30.135453] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:36.846 [2024-04-26 12:21:30.135461] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:36.846 12:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.846 12:21:30 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:36.846 12:21:30 -- common/autotest_common.sh@904 -- # return 0 00:24:36.846 12:21:30 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:36.846 12:21:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:36.846 12:21:30 -- common/autotest_common.sh@901 -- # local max=10 00:24:36.846 12:21:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:36.846 12:21:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:36.846 12:21:30 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:24:36.846 12:21:30 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:36.846 12:21:30 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:36.846 12:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.846 12:21:30 -- host/discovery.sh@63 -- # sort -n 00:24:36.846 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:36.846 12:21:30 -- host/discovery.sh@63 -- # xargs 00:24:36.846 12:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.846 12:21:30 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:36.846 12:21:30 -- common/autotest_common.sh@904 -- # return 0 00:24:36.846 12:21:30 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:36.846 12:21:30 -- host/discovery.sh@79 -- # expected_count=0 00:24:36.846 12:21:30 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:36.846 12:21:30 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:36.846 12:21:30 -- common/autotest_common.sh@901 -- # local max=10 00:24:36.846 12:21:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:36.846 12:21:30 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:36.846 12:21:30 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:36.846 12:21:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:36.846 12:21:30 -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:36.846 12:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.846 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:36.846 12:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.846 12:21:30 -- host/discovery.sh@74 -- # notification_count=0 00:24:36.846 12:21:30 -- host/discovery.sh@75 -- # notify_id=2 00:24:36.846 12:21:30 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:36.846 12:21:30 -- common/autotest_common.sh@904 -- # return 0 00:24:36.846 12:21:30 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:36.846 12:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.846 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:36.846 [2024-04-26 12:21:30.297400] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:36.846 [2024-04-26 12:21:30.297439] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:37.137 12:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.137 12:21:30 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:37.137 12:21:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:37.137 12:21:30 -- common/autotest_common.sh@901 -- # local max=10 00:24:37.137 12:21:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:37.137 12:21:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:37.137 [2024-04-26 12:21:30.302745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.137 [2024-04-26 12:21:30.302802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.137 [2024-04-26 12:21:30.302817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.137 [2024-04-26 12:21:30.302827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.137 [2024-04-26 12:21:30.302837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.137 [2024-04-26 12:21:30.302846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.138 [2024-04-26 12:21:30.302856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.138 [2024-04-26 12:21:30.302865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.138 [2024-04-26 12:21:30.302874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1239000 is same with the state(5) to be set 00:24:37.138 [2024-04-26 12:21:30.303403] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:37.138 12:21:30 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:24:37.138 [2024-04-26 12:21:30.303444] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:37.138 [2024-04-26 12:21:30.303521] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1239000 (9): Bad file descriptor 00:24:37.138 12:21:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:37.138 12:21:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:37.138 12:21:30 -- host/discovery.sh@59 -- # sort 00:24:37.138 12:21:30 -- host/discovery.sh@59 -- # xargs 00:24:37.138 12:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.138 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:37.138 12:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.138 12:21:30 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.138 12:21:30 -- common/autotest_common.sh@904 -- # return 0 00:24:37.138 12:21:30 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:37.138 12:21:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:37.138 12:21:30 -- common/autotest_common.sh@901 -- # local max=10 00:24:37.138 12:21:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:37.138 12:21:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:37.138 12:21:30 -- common/autotest_common.sh@903 -- # get_bdev_list 00:24:37.138 12:21:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.138 12:21:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:37.138 12:21:30 -- host/discovery.sh@55 -- # sort 00:24:37.138 12:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.138 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:37.138 12:21:30 -- host/discovery.sh@55 -- # xargs 00:24:37.138 12:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.138 12:21:30 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:37.138 12:21:30 -- common/autotest_common.sh@904 -- # return 0 00:24:37.138 12:21:30 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:37.138 12:21:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:37.138 12:21:30 -- common/autotest_common.sh@901 -- # local max=10 00:24:37.138 12:21:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:37.138 12:21:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:37.138 12:21:30 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:24:37.138 12:21:30 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:37.138 12:21:30 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:37.138 12:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.138 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:37.138 12:21:30 -- host/discovery.sh@63 -- # sort -n 00:24:37.138 12:21:30 -- host/discovery.sh@63 -- # xargs 00:24:37.138 12:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.138 12:21:30 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:24:37.138 12:21:30 -- common/autotest_common.sh@904 -- # return 0 00:24:37.138 12:21:30 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:37.138 12:21:30 
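The path assertion performed here (4420 dropped, 4421 kept after the listener removal) relies on listing the trsvcid of every connected path for one controller; the traced pipeline amounts to the following helper.

# As traced at host/discovery.sh@63: list the service IDs (ports) of all paths of one controller.
get_subsystem_paths() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | \
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
# e.g. get_subsystem_paths nvme0  ->  "4421" once the 4420 listener is removed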
-- host/discovery.sh@79 -- # expected_count=0 00:24:37.138 12:21:30 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:37.138 12:21:30 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:37.138 12:21:30 -- common/autotest_common.sh@901 -- # local max=10 00:24:37.138 12:21:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:37.138 12:21:30 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:37.138 12:21:30 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:37.138 12:21:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:37.138 12:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.138 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:37.138 12:21:30 -- host/discovery.sh@74 -- # jq '. | length' 00:24:37.138 12:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.138 12:21:30 -- host/discovery.sh@74 -- # notification_count=0 00:24:37.138 12:21:30 -- host/discovery.sh@75 -- # notify_id=2 00:24:37.138 12:21:30 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:37.138 12:21:30 -- common/autotest_common.sh@904 -- # return 0 00:24:37.138 12:21:30 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:37.138 12:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.138 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:37.138 12:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.138 12:21:30 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:37.138 12:21:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:37.138 12:21:30 -- common/autotest_common.sh@901 -- # local max=10 00:24:37.138 12:21:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:37.138 12:21:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:37.138 12:21:30 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:24:37.138 12:21:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:37.138 12:21:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:37.138 12:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.138 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:37.138 12:21:30 -- host/discovery.sh@59 -- # sort 00:24:37.138 12:21:30 -- host/discovery.sh@59 -- # xargs 00:24:37.138 12:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.138 12:21:30 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:24:37.138 12:21:30 -- common/autotest_common.sh@904 -- # return 0 00:24:37.138 12:21:30 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:37.138 12:21:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:37.138 12:21:30 -- common/autotest_common.sh@901 -- # local max=10 00:24:37.138 12:21:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:37.138 12:21:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:37.138 12:21:30 -- common/autotest_common.sh@903 -- # get_bdev_list 00:24:37.138 12:21:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:37.138 12:21:30 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.138 12:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.138 12:21:30 -- host/discovery.sh@55 -- # sort 00:24:37.138 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:37.138 12:21:30 -- host/discovery.sh@55 -- # xargs 00:24:37.397 12:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.397 12:21:30 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:24:37.397 12:21:30 -- common/autotest_common.sh@904 -- # return 0 00:24:37.397 12:21:30 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:37.397 12:21:30 -- host/discovery.sh@79 -- # expected_count=2 00:24:37.397 12:21:30 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:37.397 12:21:30 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:37.397 12:21:30 -- common/autotest_common.sh@901 -- # local max=10 00:24:37.397 12:21:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:37.397 12:21:30 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:37.397 12:21:30 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:37.397 12:21:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:37.397 12:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.397 12:21:30 -- host/discovery.sh@74 -- # jq '. | length' 00:24:37.397 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:37.397 12:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.397 12:21:30 -- host/discovery.sh@74 -- # notification_count=2 00:24:37.397 12:21:30 -- host/discovery.sh@75 -- # notify_id=4 00:24:37.397 12:21:30 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:37.397 12:21:30 -- common/autotest_common.sh@904 -- # return 0 00:24:37.397 12:21:30 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:37.397 12:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.397 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:38.332 [2024-04-26 12:21:31.717643] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:38.332 [2024-04-26 12:21:31.717693] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:38.332 [2024-04-26 12:21:31.717727] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:38.333 [2024-04-26 12:21:31.723698] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:38.333 [2024-04-26 12:21:31.783388] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:38.333 [2024-04-26 12:21:31.783448] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:38.333 12:21:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.333 12:21:31 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:38.333 12:21:31 -- common/autotest_common.sh@638 -- 
# local es=0 00:24:38.333 12:21:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:38.333 12:21:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:38.333 12:21:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:38.333 12:21:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:38.333 12:21:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:38.333 12:21:31 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:38.333 12:21:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.333 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:24:38.333 request: 00:24:38.333 { 00:24:38.591 "name": "nvme", 00:24:38.591 "trtype": "tcp", 00:24:38.591 "traddr": "10.0.0.2", 00:24:38.591 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:38.591 "adrfam": "ipv4", 00:24:38.591 "trsvcid": "8009", 00:24:38.591 "wait_for_attach": true, 00:24:38.591 "method": "bdev_nvme_start_discovery", 00:24:38.591 "req_id": 1 00:24:38.591 } 00:24:38.591 Got JSON-RPC error response 00:24:38.591 response: 00:24:38.591 { 00:24:38.591 "code": -17, 00:24:38.591 "message": "File exists" 00:24:38.591 } 00:24:38.591 12:21:31 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:38.591 12:21:31 -- common/autotest_common.sh@641 -- # es=1 00:24:38.591 12:21:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:38.591 12:21:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:38.591 12:21:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:38.591 12:21:31 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:38.591 12:21:31 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:38.591 12:21:31 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:38.591 12:21:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.591 12:21:31 -- host/discovery.sh@67 -- # xargs 00:24:38.591 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:24:38.591 12:21:31 -- host/discovery.sh@67 -- # sort 00:24:38.591 12:21:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.591 12:21:31 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:38.591 12:21:31 -- host/discovery.sh@146 -- # get_bdev_list 00:24:38.591 12:21:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.591 12:21:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.591 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:24:38.591 12:21:31 -- host/discovery.sh@55 -- # xargs 00:24:38.591 12:21:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:38.591 12:21:31 -- host/discovery.sh@55 -- # sort 00:24:38.591 12:21:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.591 12:21:31 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:38.591 12:21:31 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:38.591 12:21:31 -- common/autotest_common.sh@638 -- # local es=0 00:24:38.592 12:21:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 
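The "NOT rpc_cmd ..." wrapper above inverts the command's exit status so that the expected -17 "File exists" failure counts as a pass. Stripped of the valid_exec_arg and signal-status checks seen in the trace, it behaves roughly like the sketch below; the omissions are deliberate simplifications, not the full helper.

# Simplified sketch of the NOT helper (autotest_common.sh@638-665 in the trace).
NOT() {
    local es=0
    "$@" || es=$?      # run the wrapped rpc_cmd; a JSON-RPC error such as -17 sets es to nonzero
    (( es != 0 ))      # NOT succeeds only when the wrapped command failed
}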
00:24:38.592 12:21:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:38.592 12:21:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:38.592 12:21:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:38.592 12:21:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:38.592 12:21:31 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:38.592 12:21:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.592 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:24:38.592 request: 00:24:38.592 { 00:24:38.592 "name": "nvme_second", 00:24:38.592 "trtype": "tcp", 00:24:38.592 "traddr": "10.0.0.2", 00:24:38.592 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:38.592 "adrfam": "ipv4", 00:24:38.592 "trsvcid": "8009", 00:24:38.592 "wait_for_attach": true, 00:24:38.592 "method": "bdev_nvme_start_discovery", 00:24:38.592 "req_id": 1 00:24:38.592 } 00:24:38.592 Got JSON-RPC error response 00:24:38.592 response: 00:24:38.592 { 00:24:38.592 "code": -17, 00:24:38.592 "message": "File exists" 00:24:38.592 } 00:24:38.592 12:21:31 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:38.592 12:21:31 -- common/autotest_common.sh@641 -- # es=1 00:24:38.592 12:21:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:38.592 12:21:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:38.592 12:21:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:38.592 12:21:31 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:38.592 12:21:31 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:38.592 12:21:31 -- host/discovery.sh@67 -- # sort 00:24:38.592 12:21:31 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:38.592 12:21:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.592 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:24:38.592 12:21:31 -- host/discovery.sh@67 -- # xargs 00:24:38.592 12:21:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.592 12:21:31 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:38.592 12:21:31 -- host/discovery.sh@152 -- # get_bdev_list 00:24:38.592 12:21:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.592 12:21:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.592 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:24:38.592 12:21:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:38.592 12:21:31 -- host/discovery.sh@55 -- # sort 00:24:38.592 12:21:31 -- host/discovery.sh@55 -- # xargs 00:24:38.592 12:21:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.592 12:21:32 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:38.592 12:21:32 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:38.592 12:21:32 -- common/autotest_common.sh@638 -- # local es=0 00:24:38.592 12:21:32 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:38.592 12:21:32 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:38.592 12:21:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:38.592 12:21:32 -- 
common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:38.592 12:21:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:38.592 12:21:32 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:38.592 12:21:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.592 12:21:32 -- common/autotest_common.sh@10 -- # set +x 00:24:40.003 [2024-04-26 12:21:33.041036] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.003 [2024-04-26 12:21:33.041157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.003 [2024-04-26 12:21:33.041218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.003 [2024-04-26 12:21:33.041237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1293990 with addr=10.0.0.2, port=8010 00:24:40.003 [2024-04-26 12:21:33.041260] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:40.003 [2024-04-26 12:21:33.041271] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:40.003 [2024-04-26 12:21:33.041280] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:40.595 [2024-04-26 12:21:34.041060] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.595 [2024-04-26 12:21:34.041191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.595 [2024-04-26 12:21:34.041238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.595 [2024-04-26 12:21:34.041255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1293990 with addr=10.0.0.2, port=8010 00:24:40.596 [2024-04-26 12:21:34.041278] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:40.596 [2024-04-26 12:21:34.041288] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:40.596 [2024-04-26 12:21:34.041298] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:41.971 [2024-04-26 12:21:35.040897] bdev_nvme.c:6966:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:41.971 request: 00:24:41.971 { 00:24:41.971 "name": "nvme_second", 00:24:41.971 "trtype": "tcp", 00:24:41.971 "traddr": "10.0.0.2", 00:24:41.971 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:41.971 "adrfam": "ipv4", 00:24:41.971 "trsvcid": "8010", 00:24:41.971 "attach_timeout_ms": 3000, 00:24:41.971 "method": "bdev_nvme_start_discovery", 00:24:41.971 "req_id": 1 00:24:41.971 } 00:24:41.971 Got JSON-RPC error response 00:24:41.971 response: 00:24:41.971 { 00:24:41.971 "code": -110, 00:24:41.971 "message": "Connection timed out" 00:24:41.971 } 00:24:41.971 12:21:35 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:41.971 12:21:35 -- common/autotest_common.sh@641 -- # es=1 00:24:41.971 12:21:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:41.971 12:21:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:41.971 12:21:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:41.971 12:21:35 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:41.971 12:21:35 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:41.971 12:21:35 -- host/discovery.sh@67 -- # jq -r '.[].name' 
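The discovery-service listing used for these assertions follows the same jq/sort/xargs shape as the other getters; reconstructed from the traced host/discovery.sh@67 lines:

# Names of active discovery services on the host SPDK instance.
get_discovery_ctrlrs() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name' | sort | xargs
}
# Here it still reports "nvme" after the 8010 attempt times out with -110 (Connection timed out).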
00:24:41.971 12:21:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.971 12:21:35 -- common/autotest_common.sh@10 -- # set +x 00:24:41.971 12:21:35 -- host/discovery.sh@67 -- # sort 00:24:41.971 12:21:35 -- host/discovery.sh@67 -- # xargs 00:24:41.971 12:21:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.971 12:21:35 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:41.971 12:21:35 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:41.971 12:21:35 -- host/discovery.sh@161 -- # kill 73314 00:24:41.971 12:21:35 -- host/discovery.sh@162 -- # nvmftestfini 00:24:41.971 12:21:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:41.971 12:21:35 -- nvmf/common.sh@117 -- # sync 00:24:41.971 12:21:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:41.971 12:21:35 -- nvmf/common.sh@120 -- # set +e 00:24:41.971 12:21:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:41.971 12:21:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:41.971 rmmod nvme_tcp 00:24:41.971 rmmod nvme_fabrics 00:24:41.971 rmmod nvme_keyring 00:24:41.971 12:21:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:41.971 12:21:35 -- nvmf/common.sh@124 -- # set -e 00:24:41.971 12:21:35 -- nvmf/common.sh@125 -- # return 0 00:24:41.971 12:21:35 -- nvmf/common.sh@478 -- # '[' -n 73282 ']' 00:24:41.971 12:21:35 -- nvmf/common.sh@479 -- # killprocess 73282 00:24:41.971 12:21:35 -- common/autotest_common.sh@936 -- # '[' -z 73282 ']' 00:24:41.971 12:21:35 -- common/autotest_common.sh@940 -- # kill -0 73282 00:24:41.971 12:21:35 -- common/autotest_common.sh@941 -- # uname 00:24:41.971 12:21:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:41.971 12:21:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73282 00:24:41.971 12:21:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:41.971 12:21:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:41.971 killing process with pid 73282 00:24:41.971 12:21:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73282' 00:24:41.971 12:21:35 -- common/autotest_common.sh@955 -- # kill 73282 00:24:41.971 12:21:35 -- common/autotest_common.sh@960 -- # wait 73282 00:24:42.229 12:21:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:42.229 12:21:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:42.229 12:21:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:42.229 12:21:35 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:42.229 12:21:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:42.229 12:21:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.229 12:21:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:42.229 12:21:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.229 12:21:35 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:42.229 00:24:42.229 real 0m10.031s 00:24:42.229 user 0m19.329s 00:24:42.229 sys 0m1.921s 00:24:42.229 12:21:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:42.229 12:21:35 -- common/autotest_common.sh@10 -- # set +x 00:24:42.230 ************************************ 00:24:42.230 END TEST nvmf_discovery 00:24:42.230 ************************************ 00:24:42.230 12:21:35 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:42.230 12:21:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 
1 ']' 00:24:42.230 12:21:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:42.230 12:21:35 -- common/autotest_common.sh@10 -- # set +x 00:24:42.230 ************************************ 00:24:42.230 START TEST nvmf_discovery_remove_ifc 00:24:42.230 ************************************ 00:24:42.230 12:21:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:42.489 * Looking for test storage... 00:24:42.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:42.489 12:21:35 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:42.489 12:21:35 -- nvmf/common.sh@7 -- # uname -s 00:24:42.489 12:21:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.489 12:21:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.489 12:21:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.489 12:21:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.489 12:21:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.489 12:21:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.489 12:21:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.489 12:21:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.489 12:21:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.489 12:21:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.489 12:21:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:24:42.489 12:21:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:24:42.489 12:21:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.489 12:21:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.489 12:21:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:42.489 12:21:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.489 12:21:35 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:42.489 12:21:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.489 12:21:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.489 12:21:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.489 12:21:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.489 12:21:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.489 12:21:35 -- paths/export.sh@4 -- 
# PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.489 12:21:35 -- paths/export.sh@5 -- # export PATH 00:24:42.489 12:21:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.489 12:21:35 -- nvmf/common.sh@47 -- # : 0 00:24:42.489 12:21:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:42.489 12:21:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:42.489 12:21:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.489 12:21:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.489 12:21:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.489 12:21:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:42.489 12:21:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:42.489 12:21:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:42.489 12:21:35 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:42.489 12:21:35 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:42.489 12:21:35 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:42.489 12:21:35 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:42.489 12:21:35 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:42.489 12:21:35 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:42.489 12:21:35 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:42.489 12:21:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:42.489 12:21:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.489 12:21:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:42.489 12:21:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:42.489 12:21:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:42.489 12:21:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.489 12:21:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:42.489 12:21:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.489 12:21:35 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:42.489 12:21:35 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:42.489 12:21:35 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:42.489 12:21:35 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:42.489 12:21:35 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:42.489 12:21:35 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:42.489 12:21:35 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.489 12:21:35 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.489 12:21:35 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:42.490 12:21:35 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:42.490 12:21:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:42.490 12:21:35 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:42.490 12:21:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:42.490 12:21:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.490 12:21:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:42.490 12:21:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:42.490 12:21:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:42.490 12:21:35 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:42.490 12:21:35 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:42.490 12:21:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:42.490 Cannot find device "nvmf_tgt_br" 00:24:42.490 12:21:35 -- nvmf/common.sh@155 -- # true 00:24:42.490 12:21:35 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:42.490 Cannot find device "nvmf_tgt_br2" 00:24:42.490 12:21:35 -- nvmf/common.sh@156 -- # true 00:24:42.490 12:21:35 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:42.490 12:21:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:42.490 Cannot find device "nvmf_tgt_br" 00:24:42.490 12:21:35 -- nvmf/common.sh@158 -- # true 00:24:42.490 12:21:35 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:42.490 Cannot find device "nvmf_tgt_br2" 00:24:42.490 12:21:35 -- nvmf/common.sh@159 -- # true 00:24:42.490 12:21:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:42.490 12:21:35 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:42.490 12:21:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:42.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:42.490 12:21:35 -- nvmf/common.sh@162 -- # true 00:24:42.490 12:21:35 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:42.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:42.490 12:21:35 -- nvmf/common.sh@163 -- # true 00:24:42.490 12:21:35 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:42.490 12:21:35 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:42.490 12:21:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:42.490 12:21:35 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:42.490 12:21:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:42.490 12:21:35 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:42.490 12:21:35 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:42.490 12:21:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:42.490 12:21:35 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:42.748 12:21:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:42.748 12:21:35 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:42.748 12:21:35 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:42.748 12:21:35 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:42.748 12:21:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:42.748 12:21:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:42.748 12:21:35 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:42.748 12:21:36 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:42.748 12:21:36 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:42.748 12:21:36 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:42.748 12:21:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:42.748 12:21:36 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:42.748 12:21:36 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:42.748 12:21:36 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:42.748 12:21:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:42.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:42.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:24:42.748 00:24:42.748 --- 10.0.0.2 ping statistics --- 00:24:42.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.748 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:24:42.748 12:21:36 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:42.748 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:42.748 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:24:42.748 00:24:42.748 --- 10.0.0.3 ping statistics --- 00:24:42.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.748 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:24:42.748 12:21:36 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:42.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:42.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:24:42.748 00:24:42.748 --- 10.0.0.1 ping statistics --- 00:24:42.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.748 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:24:42.748 12:21:36 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.748 12:21:36 -- nvmf/common.sh@422 -- # return 0 00:24:42.748 12:21:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:42.748 12:21:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.748 12:21:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:42.748 12:21:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:42.748 12:21:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.748 12:21:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:42.748 12:21:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:42.748 12:21:36 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:42.748 12:21:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:42.748 12:21:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:42.748 12:21:36 -- common/autotest_common.sh@10 -- # set +x 00:24:42.748 12:21:36 -- nvmf/common.sh@470 -- # nvmfpid=73773 00:24:42.748 12:21:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:42.748 12:21:36 -- nvmf/common.sh@471 -- # waitforlisten 73773 00:24:42.748 12:21:36 -- common/autotest_common.sh@817 -- # '[' -z 73773 ']' 00:24:42.748 12:21:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.748 12:21:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:42.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.748 12:21:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.748 12:21:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:42.748 12:21:36 -- common/autotest_common.sh@10 -- # set +x 00:24:42.748 [2024-04-26 12:21:36.161404] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:24:42.748 [2024-04-26 12:21:36.161505] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.007 [2024-04-26 12:21:36.300496] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.007 [2024-04-26 12:21:36.427747] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.007 [2024-04-26 12:21:36.427805] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.007 [2024-04-26 12:21:36.427819] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.007 [2024-04-26 12:21:36.427829] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.007 [2024-04-26 12:21:36.427839] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
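The nvmf_veth_init trace above builds the virtual topology the whole test runs on: a network namespace (nvmf_tgt_ns_spdk) holding the target ends of two veth pairs, a bridge (nvmf_br) joining the host-side peers, 10.0.0.1 on the initiator interface and 10.0.0.2/10.0.0.3 on the target interfaces, plus iptables rules that accept NVMe/TCP on port 4420 and allow bridge-internal forwarding; single pings in each direction confirm the path before the target application is launched inside the namespace. A condensed standalone sketch of the same topology (interface names and addresses taken from the trace; the second target interface nvmf_tgt_if2 with 10.0.0.3 is set up identically and omitted here; run as root):

    # namespace plus veth pairs, target end moved into the namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # initiator address on the host, target address inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the two sides and open the NVMe/TCP port
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # reachability check in both directions
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With the topology up, nvmfappstart runs the target entirely inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2), which is the process whose DPDK and reactor start-up messages follow.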
00:24:43.007 [2024-04-26 12:21:36.427885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.941 12:21:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:43.941 12:21:37 -- common/autotest_common.sh@850 -- # return 0 00:24:43.941 12:21:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:43.941 12:21:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:43.941 12:21:37 -- common/autotest_common.sh@10 -- # set +x 00:24:43.941 12:21:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.941 12:21:37 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:43.941 12:21:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.941 12:21:37 -- common/autotest_common.sh@10 -- # set +x 00:24:43.941 [2024-04-26 12:21:37.147104] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.941 [2024-04-26 12:21:37.155252] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:43.941 null0 00:24:43.941 [2024-04-26 12:21:37.187171] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.941 12:21:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.941 12:21:37 -- host/discovery_remove_ifc.sh@59 -- # hostpid=73805 00:24:43.941 12:21:37 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:43.941 12:21:37 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 73805 /tmp/host.sock 00:24:43.941 12:21:37 -- common/autotest_common.sh@817 -- # '[' -z 73805 ']' 00:24:43.941 12:21:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:24:43.942 12:21:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:43.942 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:43.942 12:21:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:43.942 12:21:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:43.942 12:21:37 -- common/autotest_common.sh@10 -- # set +x 00:24:43.942 [2024-04-26 12:21:37.269167] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
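At this point the target application is up inside the namespace with listeners on 10.0.0.2:8009 (discovery) and 10.0.0.2:4420 (the nqn.2016-06.io.spdk:cnode0 subsystem, with the null0 bdev echoed in the RPC output above created for it), and a second nvmf_tgt instance is started as the "host" side with its RPC socket on /tmp/host.sock so it can act as a bdev_nvme initiator. Since rpc_cmd in the autotest framework is a thin wrapper around scripts/rpc.py, the host-side set-up the script drives next is roughly equivalent to the following sketch (argument values copied from the trace; an illustration, not the literal test script):

    # host-side app: nvmf_tgt used purely as an initiator, RPC server on /tmp/host.sock
    build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

    # configure bdev_nvme before the framework starts, then initialize the framework
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init

    # start a discovery session against the target; the short timeouts make the
    # interface-removal case further down fail over quickly
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

--wait-for-attach makes the discovery RPC return only after the discovered subsystem has actually been attached, which is why the nvme0n1 bdev can be checked immediately afterwards.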
00:24:43.942 [2024-04-26 12:21:37.269294] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73805 ] 00:24:44.200 [2024-04-26 12:21:37.411103] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.200 [2024-04-26 12:21:37.524745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.767 12:21:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:44.767 12:21:38 -- common/autotest_common.sh@850 -- # return 0 00:24:44.767 12:21:38 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:44.767 12:21:38 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:44.767 12:21:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.767 12:21:38 -- common/autotest_common.sh@10 -- # set +x 00:24:45.025 12:21:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.025 12:21:38 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:45.025 12:21:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.025 12:21:38 -- common/autotest_common.sh@10 -- # set +x 00:24:45.025 12:21:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.025 12:21:38 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:45.025 12:21:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.025 12:21:38 -- common/autotest_common.sh@10 -- # set +x 00:24:45.958 [2024-04-26 12:21:39.350393] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:45.958 [2024-04-26 12:21:39.350441] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:45.958 [2024-04-26 12:21:39.350460] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:45.958 [2024-04-26 12:21:39.356438] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:45.958 [2024-04-26 12:21:39.412702] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:45.958 [2024-04-26 12:21:39.412774] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:45.958 [2024-04-26 12:21:39.412801] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:45.958 [2024-04-26 12:21:39.412818] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:45.958 [2024-04-26 12:21:39.412845] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:45.958 12:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.958 12:21:39 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:45.958 12:21:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:45.958 12:21:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:45.958 12:21:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:24:45.958 12:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.958 12:21:39 -- common/autotest_common.sh@10 -- # set +x 00:24:45.958 12:21:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:45.958 [2024-04-26 12:21:39.418985] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x17230f0 was disconnected and freed. delete nvme_qpair. 00:24:45.958 12:21:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:46.216 12:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.216 12:21:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:46.216 12:21:39 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:24:46.216 12:21:39 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:24:46.216 12:21:39 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:46.216 12:21:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:46.216 12:21:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:46.216 12:21:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:46.216 12:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.216 12:21:39 -- common/autotest_common.sh@10 -- # set +x 00:24:46.216 12:21:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:46.216 12:21:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:46.216 12:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.216 12:21:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:46.216 12:21:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:47.150 12:21:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:47.150 12:21:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.150 12:21:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:47.150 12:21:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.150 12:21:40 -- common/autotest_common.sh@10 -- # set +x 00:24:47.150 12:21:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:47.150 12:21:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:47.150 12:21:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.150 12:21:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:47.150 12:21:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:48.525 12:21:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:48.525 12:21:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.525 12:21:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:48.525 12:21:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.525 12:21:41 -- common/autotest_common.sh@10 -- # set +x 00:24:48.525 12:21:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:48.525 12:21:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:48.525 12:21:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.525 12:21:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:48.525 12:21:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:49.460 12:21:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:49.460 12:21:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.460 12:21:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.460 12:21:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:24:49.460 12:21:42 -- common/autotest_common.sh@10 -- # set +x 00:24:49.460 12:21:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:49.460 12:21:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:49.460 12:21:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.460 12:21:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:49.460 12:21:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:50.392 12:21:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:50.392 12:21:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.392 12:21:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.392 12:21:43 -- common/autotest_common.sh@10 -- # set +x 00:24:50.392 12:21:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:50.392 12:21:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:50.392 12:21:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:50.392 12:21:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.392 12:21:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:50.392 12:21:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:51.764 12:21:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:51.764 12:21:44 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:51.764 12:21:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.764 12:21:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.764 12:21:44 -- common/autotest_common.sh@10 -- # set +x 00:24:51.764 12:21:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:51.764 12:21:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:51.764 12:21:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.764 [2024-04-26 12:21:44.840628] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:51.764 [2024-04-26 12:21:44.840690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.764 [2024-04-26 12:21:44.840706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.764 [2024-04-26 12:21:44.840720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.764 [2024-04-26 12:21:44.840730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.764 [2024-04-26 12:21:44.840740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.764 [2024-04-26 12:21:44.840749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.764 [2024-04-26 12:21:44.840760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.764 [2024-04-26 12:21:44.840769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.764 [2024-04-26 12:21:44.840779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
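The repeated bdev_get_bdevs / jq / sort / xargs fragments above are the test's wait_for_bdev helper polling the host app once per second: get_bdev_list asks the host for its bdev names, and the loop keeps sleeping while the list still differs from the expected value (nvme0n1 while the subsystem should be attached, the empty string once it should be gone). Reconstructed from the trace, the helper behaves roughly like this (the real script presumably also bounds the number of iterations; that part is omitted here):

    get_bdev_list() {
        # names of all bdevs currently known to the host-side app, space separated
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1    # e.g. nvme0n1, nvme1n1, or '' for "no bdev left"
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }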
00:24:51.764 [2024-04-26 12:21:44.840788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.764 [2024-04-26 12:21:44.840797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1691fd0 is same with the state(5) to be set 00:24:51.764 [2024-04-26 12:21:44.850622] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1691fd0 (9): Bad file descriptor 00:24:51.764 12:21:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:51.764 12:21:44 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:51.764 [2024-04-26 12:21:44.860662] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:52.701 12:21:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:52.701 12:21:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:52.701 12:21:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.701 12:21:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:52.701 12:21:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:52.701 12:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.701 12:21:45 -- common/autotest_common.sh@10 -- # set +x 00:24:52.701 [2024-04-26 12:21:45.889266] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:24:53.636 [2024-04-26 12:21:46.913312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:54.571 [2024-04-26 12:21:47.937321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:54.571 [2024-04-26 12:21:47.937473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1691fd0 with addr=10.0.0.2, port=4420 00:24:54.571 [2024-04-26 12:21:47.937510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1691fd0 is same with the state(5) to be set 00:24:54.571 [2024-04-26 12:21:47.938425] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1691fd0 (9): Bad file descriptor 00:24:54.571 [2024-04-26 12:21:47.938502] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:54.571 [2024-04-26 12:21:47.938554] bdev_nvme.c:6674:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:54.571 [2024-04-26 12:21:47.938642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.571 [2024-04-26 12:21:47.938687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.571 [2024-04-26 12:21:47.938715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.571 [2024-04-26 12:21:47.938737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.571 [2024-04-26 12:21:47.938759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.571 [2024-04-26 12:21:47.938779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.571 [2024-04-26 12:21:47.938801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.571 [2024-04-26 12:21:47.938821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.571 [2024-04-26 12:21:47.938843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.572 [2024-04-26 12:21:47.938863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.572 [2024-04-26 12:21:47.938883] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
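This error burst is the scenario the test exists for: a few seconds earlier the script deleted 10.0.0.2 from nvmf_tgt_if and took the interface down inside the namespace, so every reconnect attempt from the host now fails with errno 110 (connection timed out). With --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2 the bdev_nvme layer only retries briefly before failing the controller, removing the discovery entry, and dropping nvme0n1 from the bdev list, which is exactly the transition wait_for_bdev '' is polling for. The fault and the later recovery are driven with nothing more exotic than the ip commands already shown in the trace:

    # make the target unreachable from the initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

    # ...the host gives up once ctrlr-loss-timeout expires and the bdev disappears...

    # restore the path; the still-running discovery service re-attaches the
    # subsystem as a brand new controller (nvme1 / nvme1n1 further down)
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up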
00:24:54.572 [2024-04-26 12:21:47.938945] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1691890 (9): Bad file descriptor 00:24:54.572 [2024-04-26 12:21:47.939947] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:54.572 [2024-04-26 12:21:47.940012] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:54.572 12:21:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.572 12:21:47 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:54.572 12:21:47 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:55.506 12:21:48 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:55.506 12:21:48 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.506 12:21:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.506 12:21:48 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:55.506 12:21:48 -- common/autotest_common.sh@10 -- # set +x 00:24:55.764 12:21:48 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:55.764 12:21:48 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:55.764 12:21:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.764 12:21:49 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:55.765 12:21:49 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:55.765 12:21:49 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:55.765 12:21:49 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:55.765 12:21:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:55.765 12:21:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.765 12:21:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:55.765 12:21:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.765 12:21:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:55.765 12:21:49 -- common/autotest_common.sh@10 -- # set +x 00:24:55.765 12:21:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:55.765 12:21:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.765 12:21:49 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:55.765 12:21:49 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:56.699 [2024-04-26 12:21:49.946136] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:56.699 [2024-04-26 12:21:49.946195] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:56.699 [2024-04-26 12:21:49.946217] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:56.699 [2024-04-26 12:21:49.952188] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:56.699 [2024-04-26 12:21:50.007607] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:56.699 [2024-04-26 12:21:50.007672] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:56.699 [2024-04-26 12:21:50.007699] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:56.699 [2024-04-26 12:21:50.007715] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:24:56.699 [2024-04-26 12:21:50.007725] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:56.699 [2024-04-26 12:21:50.014720] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x16f3b90 was disconnected and freed. delete nvme_qpair. 00:24:56.699 12:21:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:56.699 12:21:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.699 12:21:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.699 12:21:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:56.699 12:21:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:56.699 12:21:50 -- common/autotest_common.sh@10 -- # set +x 00:24:56.699 12:21:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:56.699 12:21:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.699 12:21:50 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:56.699 12:21:50 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:56.699 12:21:50 -- host/discovery_remove_ifc.sh@90 -- # killprocess 73805 00:24:56.699 12:21:50 -- common/autotest_common.sh@936 -- # '[' -z 73805 ']' 00:24:56.699 12:21:50 -- common/autotest_common.sh@940 -- # kill -0 73805 00:24:56.699 12:21:50 -- common/autotest_common.sh@941 -- # uname 00:24:56.699 12:21:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:56.699 12:21:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73805 00:24:56.957 12:21:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:56.957 12:21:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:56.957 killing process with pid 73805 00:24:56.957 12:21:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73805' 00:24:56.957 12:21:50 -- common/autotest_common.sh@955 -- # kill 73805 00:24:56.957 12:21:50 -- common/autotest_common.sh@960 -- # wait 73805 00:24:56.957 12:21:50 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:56.957 12:21:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:56.957 12:21:50 -- nvmf/common.sh@117 -- # sync 00:24:57.215 12:21:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:57.215 12:21:50 -- nvmf/common.sh@120 -- # set +e 00:24:57.215 12:21:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:57.215 12:21:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:57.215 rmmod nvme_tcp 00:24:57.215 rmmod nvme_fabrics 00:24:57.215 rmmod nvme_keyring 00:24:57.215 12:21:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:57.215 12:21:50 -- nvmf/common.sh@124 -- # set -e 00:24:57.215 12:21:50 -- nvmf/common.sh@125 -- # return 0 00:24:57.215 12:21:50 -- nvmf/common.sh@478 -- # '[' -n 73773 ']' 00:24:57.215 12:21:50 -- nvmf/common.sh@479 -- # killprocess 73773 00:24:57.215 12:21:50 -- common/autotest_common.sh@936 -- # '[' -z 73773 ']' 00:24:57.215 12:21:50 -- common/autotest_common.sh@940 -- # kill -0 73773 00:24:57.215 12:21:50 -- common/autotest_common.sh@941 -- # uname 00:24:57.215 12:21:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:57.215 12:21:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73773 00:24:57.215 killing process with pid 73773 00:24:57.215 12:21:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:57.215 12:21:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
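With the subsystem re-attached as nvme1n1 the functional part of the test is over, and the rest of the trace is clean-up: the host app (pid 73805) and the target app (pid 73773) are stopped via killprocess, and nvmftestfini unloads the initiator-side kernel modules and flushes the test addresses. Condensed to the steps visible in the log (a sketch of the clean-up, not the literal helpers; hostpid and nvmfpid are the PIDs recorded at start-up):

    kill "$hostpid"                 # 73805, the /tmp/host.sock nvmf_tgt
    kill "$nvmfpid"                 # 73773, the in-namespace target
    modprobe -v -r nvme-tcp         # also removes nvme_fabrics / nvme_keyring, as the rmmod lines show
    modprobe -v -r nvme-fabrics
    ip -4 addr flush nvmf_init_if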
00:24:57.215 12:21:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73773' 00:24:57.215 12:21:50 -- common/autotest_common.sh@955 -- # kill 73773 00:24:57.215 12:21:50 -- common/autotest_common.sh@960 -- # wait 73773 00:24:57.473 12:21:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:57.473 12:21:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:57.473 12:21:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:57.473 12:21:50 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:57.473 12:21:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:57.473 12:21:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.473 12:21:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.473 12:21:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.473 12:21:50 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:57.473 00:24:57.473 real 0m15.201s 00:24:57.473 user 0m24.265s 00:24:57.473 sys 0m2.644s 00:24:57.473 12:21:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:57.473 12:21:50 -- common/autotest_common.sh@10 -- # set +x 00:24:57.473 ************************************ 00:24:57.473 END TEST nvmf_discovery_remove_ifc 00:24:57.473 ************************************ 00:24:57.473 12:21:50 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:57.473 12:21:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:57.473 12:21:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:57.473 12:21:50 -- common/autotest_common.sh@10 -- # set +x 00:24:57.735 ************************************ 00:24:57.735 START TEST nvmf_identify_kernel_target 00:24:57.735 ************************************ 00:24:57.735 12:21:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:57.735 * Looking for test storage... 
00:24:57.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:57.735 12:21:51 -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:57.735 12:21:51 -- nvmf/common.sh@7 -- # uname -s 00:24:57.735 12:21:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.735 12:21:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.735 12:21:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.735 12:21:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.735 12:21:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.735 12:21:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.735 12:21:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.735 12:21:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.735 12:21:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.735 12:21:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.735 12:21:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:24:57.735 12:21:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:24:57.735 12:21:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.735 12:21:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.735 12:21:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:57.735 12:21:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.735 12:21:51 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:57.735 12:21:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.736 12:21:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.736 12:21:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.736 12:21:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.736 12:21:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.736 12:21:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.736 12:21:51 -- paths/export.sh@5 -- # export PATH 00:24:57.736 12:21:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.736 12:21:51 -- nvmf/common.sh@47 -- # : 0 00:24:57.736 12:21:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:57.736 12:21:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:57.736 12:21:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.736 12:21:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.736 12:21:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.736 12:21:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:57.736 12:21:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:57.736 12:21:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:57.736 12:21:51 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:57.736 12:21:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:57.736 12:21:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.736 12:21:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:57.736 12:21:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:57.736 12:21:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:57.736 12:21:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.736 12:21:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.736 12:21:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.736 12:21:51 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:57.736 12:21:51 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:57.736 12:21:51 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:57.736 12:21:51 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:57.736 12:21:51 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:57.736 12:21:51 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:57.736 12:21:51 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.736 12:21:51 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.736 12:21:51 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:57.736 12:21:51 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:57.736 12:21:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:57.736 12:21:51 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:57.736 12:21:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:57.736 12:21:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:57.736 12:21:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:57.736 12:21:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:57.736 12:21:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:57.736 12:21:51 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:57.736 12:21:51 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:57.736 12:21:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:57.736 Cannot find device "nvmf_tgt_br" 00:24:57.736 12:21:51 -- nvmf/common.sh@155 -- # true 00:24:57.736 12:21:51 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:57.736 Cannot find device "nvmf_tgt_br2" 00:24:57.736 12:21:51 -- nvmf/common.sh@156 -- # true 00:24:57.736 12:21:51 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:57.736 12:21:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:57.736 Cannot find device "nvmf_tgt_br" 00:24:57.736 12:21:51 -- nvmf/common.sh@158 -- # true 00:24:57.736 12:21:51 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:57.736 Cannot find device "nvmf_tgt_br2" 00:24:57.736 12:21:51 -- nvmf/common.sh@159 -- # true 00:24:57.736 12:21:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:57.736 12:21:51 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:57.736 12:21:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:57.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:57.736 12:21:51 -- nvmf/common.sh@162 -- # true 00:24:57.736 12:21:51 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:57.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:57.736 12:21:51 -- nvmf/common.sh@163 -- # true 00:24:57.736 12:21:51 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:57.736 12:21:51 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:57.736 12:21:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:57.736 12:21:51 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:57.995 12:21:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:57.995 12:21:51 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:57.995 12:21:51 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:57.995 12:21:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:57.995 12:21:51 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:57.995 12:21:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:57.995 12:21:51 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:57.995 12:21:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:57.995 12:21:51 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:57.995 12:21:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:57.995 12:21:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:57.995 12:21:51 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:57.995 12:21:51 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:57.995 12:21:51 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:57.995 12:21:51 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:57.995 12:21:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:57.995 12:21:51 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:57.995 12:21:51 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:57.995 12:21:51 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:57.995 12:21:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:57.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:24:57.995 00:24:57.995 --- 10.0.0.2 ping statistics --- 00:24:57.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.995 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:24:57.995 12:21:51 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:57.995 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:57.995 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:24:57.995 00:24:57.995 --- 10.0.0.3 ping statistics --- 00:24:57.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.995 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:24:57.995 12:21:51 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:57.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:57.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:24:57.995 00:24:57.995 --- 10.0.0.1 ping statistics --- 00:24:57.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.995 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:24:57.995 12:21:51 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.995 12:21:51 -- nvmf/common.sh@422 -- # return 0 00:24:57.995 12:21:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:57.995 12:21:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.995 12:21:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:57.995 12:21:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:57.995 12:21:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.995 12:21:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:57.995 12:21:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:57.995 12:21:51 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:57.995 12:21:51 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:57.995 12:21:51 -- nvmf/common.sh@717 -- # local ip 00:24:57.995 12:21:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:57.995 12:21:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:57.995 12:21:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.995 12:21:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.995 12:21:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:57.995 12:21:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.995 12:21:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:57.995 12:21:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:57.995 12:21:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:57.995 12:21:51 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:57.995 12:21:51 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:57.995 12:21:51 -- nvmf/common.sh@621 -- 
# local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:57.995 12:21:51 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:24:57.995 12:21:51 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:57.995 12:21:51 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:57.995 12:21:51 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:57.995 12:21:51 -- nvmf/common.sh@628 -- # local block nvme 00:24:57.995 12:21:51 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:24:57.995 12:21:51 -- nvmf/common.sh@631 -- # modprobe nvmet 00:24:57.995 12:21:51 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:57.995 12:21:51 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:58.564 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:58.564 Waiting for block devices as requested 00:24:58.564 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:58.564 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:58.564 12:21:52 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:58.564 12:21:52 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:58.564 12:21:52 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:24:58.564 12:21:52 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:58.564 12:21:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:58.564 12:21:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:58.564 12:21:52 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:24:58.564 12:21:52 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:58.564 12:21:52 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:58.823 No valid GPT data, bailing 00:24:58.823 12:21:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:58.823 12:21:52 -- scripts/common.sh@391 -- # pt= 00:24:58.823 12:21:52 -- scripts/common.sh@392 -- # return 1 00:24:58.823 12:21:52 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:24:58.823 12:21:52 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:58.823 12:21:52 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:58.823 12:21:52 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:24:58.823 12:21:52 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:24:58.823 12:21:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:58.823 12:21:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:58.823 12:21:52 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:24:58.823 12:21:52 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:24:58.823 12:21:52 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:58.823 No valid GPT data, bailing 00:24:58.823 12:21:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:24:58.823 12:21:52 -- scripts/common.sh@391 -- # pt= 00:24:58.823 12:21:52 -- scripts/common.sh@392 -- # return 1 00:24:58.823 12:21:52 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:24:58.823 12:21:52 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:58.823 12:21:52 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:58.823 12:21:52 -- nvmf/common.sh@641 -- # is_block_zoned 
nvme0n3 00:24:58.823 12:21:52 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:24:58.823 12:21:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:58.823 12:21:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:58.823 12:21:52 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:24:58.823 12:21:52 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:24:58.823 12:21:52 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:58.823 No valid GPT data, bailing 00:24:58.823 12:21:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:58.823 12:21:52 -- scripts/common.sh@391 -- # pt= 00:24:58.823 12:21:52 -- scripts/common.sh@392 -- # return 1 00:24:58.823 12:21:52 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:24:58.823 12:21:52 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:58.823 12:21:52 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:58.823 12:21:52 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:24:58.823 12:21:52 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:24:58.823 12:21:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:58.823 12:21:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:58.823 12:21:52 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:24:58.823 12:21:52 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:24:58.823 12:21:52 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:58.823 No valid GPT data, bailing 00:24:58.823 12:21:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:58.823 12:21:52 -- scripts/common.sh@391 -- # pt= 00:24:58.823 12:21:52 -- scripts/common.sh@392 -- # return 1 00:24:58.823 12:21:52 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:24:58.823 12:21:52 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:24:58.823 12:21:52 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:59.082 12:21:52 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:59.082 12:21:52 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:59.082 12:21:52 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:59.082 12:21:52 -- nvmf/common.sh@656 -- # echo 1 00:24:59.082 12:21:52 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:24:59.082 12:21:52 -- nvmf/common.sh@658 -- # echo 1 00:24:59.082 12:21:52 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:24:59.082 12:21:52 -- nvmf/common.sh@661 -- # echo tcp 00:24:59.082 12:21:52 -- nvmf/common.sh@662 -- # echo 4420 00:24:59.082 12:21:52 -- nvmf/common.sh@663 -- # echo ipv4 00:24:59.082 12:21:52 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:59.082 12:21:52 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 --hostid=df75fdfd-6375-420c-96e8-b6b24e93a083 -a 10.0.0.1 -t tcp -s 4420 00:24:59.082 00:24:59.082 Discovery Log Number of Records 2, Generation counter 2 00:24:59.082 =====Discovery Log Entry 0====== 00:24:59.082 trtype: tcp 00:24:59.082 adrfam: ipv4 00:24:59.082 subtype: current discovery subsystem 00:24:59.082 treq: not specified, sq flow control disable supported 00:24:59.082 portid: 1 00:24:59.082 trsvcid: 4420 00:24:59.082 
subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:59.082 traddr: 10.0.0.1 00:24:59.082 eflags: none 00:24:59.082 sectype: none 00:24:59.082 =====Discovery Log Entry 1====== 00:24:59.082 trtype: tcp 00:24:59.082 adrfam: ipv4 00:24:59.082 subtype: nvme subsystem 00:24:59.082 treq: not specified, sq flow control disable supported 00:24:59.082 portid: 1 00:24:59.082 trsvcid: 4420 00:24:59.082 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:59.082 traddr: 10.0.0.1 00:24:59.082 eflags: none 00:24:59.082 sectype: none 00:24:59.082 12:21:52 -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:59.082 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:59.082 ===================================================== 00:24:59.082 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:59.082 ===================================================== 00:24:59.082 Controller Capabilities/Features 00:24:59.082 ================================ 00:24:59.082 Vendor ID: 0000 00:24:59.082 Subsystem Vendor ID: 0000 00:24:59.082 Serial Number: d2b21de9b9ee9a7925e9 00:24:59.082 Model Number: Linux 00:24:59.082 Firmware Version: 6.7.0-68 00:24:59.082 Recommended Arb Burst: 0 00:24:59.082 IEEE OUI Identifier: 00 00 00 00:24:59.082 Multi-path I/O 00:24:59.082 May have multiple subsystem ports: No 00:24:59.082 May have multiple controllers: No 00:24:59.082 Associated with SR-IOV VF: No 00:24:59.082 Max Data Transfer Size: Unlimited 00:24:59.082 Max Number of Namespaces: 0 00:24:59.082 Max Number of I/O Queues: 1024 00:24:59.082 NVMe Specification Version (VS): 1.3 00:24:59.082 NVMe Specification Version (Identify): 1.3 00:24:59.082 Maximum Queue Entries: 1024 00:24:59.082 Contiguous Queues Required: No 00:24:59.082 Arbitration Mechanisms Supported 00:24:59.082 Weighted Round Robin: Not Supported 00:24:59.082 Vendor Specific: Not Supported 00:24:59.082 Reset Timeout: 7500 ms 00:24:59.082 Doorbell Stride: 4 bytes 00:24:59.082 NVM Subsystem Reset: Not Supported 00:24:59.082 Command Sets Supported 00:24:59.082 NVM Command Set: Supported 00:24:59.082 Boot Partition: Not Supported 00:24:59.082 Memory Page Size Minimum: 4096 bytes 00:24:59.082 Memory Page Size Maximum: 4096 bytes 00:24:59.082 Persistent Memory Region: Not Supported 00:24:59.082 Optional Asynchronous Events Supported 00:24:59.082 Namespace Attribute Notices: Not Supported 00:24:59.082 Firmware Activation Notices: Not Supported 00:24:59.082 ANA Change Notices: Not Supported 00:24:59.082 PLE Aggregate Log Change Notices: Not Supported 00:24:59.082 LBA Status Info Alert Notices: Not Supported 00:24:59.082 EGE Aggregate Log Change Notices: Not Supported 00:24:59.082 Normal NVM Subsystem Shutdown event: Not Supported 00:24:59.082 Zone Descriptor Change Notices: Not Supported 00:24:59.082 Discovery Log Change Notices: Supported 00:24:59.082 Controller Attributes 00:24:59.082 128-bit Host Identifier: Not Supported 00:24:59.082 Non-Operational Permissive Mode: Not Supported 00:24:59.082 NVM Sets: Not Supported 00:24:59.082 Read Recovery Levels: Not Supported 00:24:59.082 Endurance Groups: Not Supported 00:24:59.082 Predictable Latency Mode: Not Supported 00:24:59.082 Traffic Based Keep ALive: Not Supported 00:24:59.082 Namespace Granularity: Not Supported 00:24:59.082 SQ Associations: Not Supported 00:24:59.082 UUID List: Not Supported 00:24:59.082 Multi-Domain Subsystem: Not Supported 00:24:59.082 Fixed Capacity Management: Not Supported 
00:24:59.082 Variable Capacity Management: Not Supported 00:24:59.082 Delete Endurance Group: Not Supported 00:24:59.082 Delete NVM Set: Not Supported 00:24:59.082 Extended LBA Formats Supported: Not Supported 00:24:59.082 Flexible Data Placement Supported: Not Supported 00:24:59.082 00:24:59.082 Controller Memory Buffer Support 00:24:59.082 ================================ 00:24:59.082 Supported: No 00:24:59.082 00:24:59.082 Persistent Memory Region Support 00:24:59.082 ================================ 00:24:59.082 Supported: No 00:24:59.082 00:24:59.082 Admin Command Set Attributes 00:24:59.082 ============================ 00:24:59.082 Security Send/Receive: Not Supported 00:24:59.082 Format NVM: Not Supported 00:24:59.082 Firmware Activate/Download: Not Supported 00:24:59.082 Namespace Management: Not Supported 00:24:59.082 Device Self-Test: Not Supported 00:24:59.082 Directives: Not Supported 00:24:59.082 NVMe-MI: Not Supported 00:24:59.082 Virtualization Management: Not Supported 00:24:59.082 Doorbell Buffer Config: Not Supported 00:24:59.082 Get LBA Status Capability: Not Supported 00:24:59.082 Command & Feature Lockdown Capability: Not Supported 00:24:59.082 Abort Command Limit: 1 00:24:59.082 Async Event Request Limit: 1 00:24:59.082 Number of Firmware Slots: N/A 00:24:59.082 Firmware Slot 1 Read-Only: N/A 00:24:59.082 Firmware Activation Without Reset: N/A 00:24:59.082 Multiple Update Detection Support: N/A 00:24:59.082 Firmware Update Granularity: No Information Provided 00:24:59.082 Per-Namespace SMART Log: No 00:24:59.082 Asymmetric Namespace Access Log Page: Not Supported 00:24:59.082 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:59.082 Command Effects Log Page: Not Supported 00:24:59.082 Get Log Page Extended Data: Supported 00:24:59.082 Telemetry Log Pages: Not Supported 00:24:59.082 Persistent Event Log Pages: Not Supported 00:24:59.082 Supported Log Pages Log Page: May Support 00:24:59.082 Commands Supported & Effects Log Page: Not Supported 00:24:59.082 Feature Identifiers & Effects Log Page:May Support 00:24:59.082 NVMe-MI Commands & Effects Log Page: May Support 00:24:59.082 Data Area 4 for Telemetry Log: Not Supported 00:24:59.082 Error Log Page Entries Supported: 1 00:24:59.082 Keep Alive: Not Supported 00:24:59.082 00:24:59.082 NVM Command Set Attributes 00:24:59.082 ========================== 00:24:59.082 Submission Queue Entry Size 00:24:59.082 Max: 1 00:24:59.082 Min: 1 00:24:59.082 Completion Queue Entry Size 00:24:59.082 Max: 1 00:24:59.082 Min: 1 00:24:59.082 Number of Namespaces: 0 00:24:59.082 Compare Command: Not Supported 00:24:59.082 Write Uncorrectable Command: Not Supported 00:24:59.083 Dataset Management Command: Not Supported 00:24:59.083 Write Zeroes Command: Not Supported 00:24:59.083 Set Features Save Field: Not Supported 00:24:59.083 Reservations: Not Supported 00:24:59.083 Timestamp: Not Supported 00:24:59.083 Copy: Not Supported 00:24:59.083 Volatile Write Cache: Not Present 00:24:59.083 Atomic Write Unit (Normal): 1 00:24:59.083 Atomic Write Unit (PFail): 1 00:24:59.083 Atomic Compare & Write Unit: 1 00:24:59.083 Fused Compare & Write: Not Supported 00:24:59.083 Scatter-Gather List 00:24:59.083 SGL Command Set: Supported 00:24:59.083 SGL Keyed: Not Supported 00:24:59.083 SGL Bit Bucket Descriptor: Not Supported 00:24:59.083 SGL Metadata Pointer: Not Supported 00:24:59.083 Oversized SGL: Not Supported 00:24:59.083 SGL Metadata Address: Not Supported 00:24:59.083 SGL Offset: Supported 00:24:59.083 Transport SGL Data Block: Not 
Supported 00:24:59.083 Replay Protected Memory Block: Not Supported 00:24:59.083 00:24:59.083 Firmware Slot Information 00:24:59.083 ========================= 00:24:59.083 Active slot: 0 00:24:59.083 00:24:59.083 00:24:59.083 Error Log 00:24:59.083 ========= 00:24:59.083 00:24:59.083 Active Namespaces 00:24:59.083 ================= 00:24:59.083 Discovery Log Page 00:24:59.083 ================== 00:24:59.083 Generation Counter: 2 00:24:59.083 Number of Records: 2 00:24:59.083 Record Format: 0 00:24:59.083 00:24:59.083 Discovery Log Entry 0 00:24:59.083 ---------------------- 00:24:59.083 Transport Type: 3 (TCP) 00:24:59.083 Address Family: 1 (IPv4) 00:24:59.083 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:59.083 Entry Flags: 00:24:59.083 Duplicate Returned Information: 0 00:24:59.083 Explicit Persistent Connection Support for Discovery: 0 00:24:59.083 Transport Requirements: 00:24:59.083 Secure Channel: Not Specified 00:24:59.083 Port ID: 1 (0x0001) 00:24:59.083 Controller ID: 65535 (0xffff) 00:24:59.083 Admin Max SQ Size: 32 00:24:59.083 Transport Service Identifier: 4420 00:24:59.083 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:59.083 Transport Address: 10.0.0.1 00:24:59.083 Discovery Log Entry 1 00:24:59.083 ---------------------- 00:24:59.083 Transport Type: 3 (TCP) 00:24:59.083 Address Family: 1 (IPv4) 00:24:59.083 Subsystem Type: 2 (NVM Subsystem) 00:24:59.083 Entry Flags: 00:24:59.083 Duplicate Returned Information: 0 00:24:59.083 Explicit Persistent Connection Support for Discovery: 0 00:24:59.083 Transport Requirements: 00:24:59.083 Secure Channel: Not Specified 00:24:59.083 Port ID: 1 (0x0001) 00:24:59.083 Controller ID: 65535 (0xffff) 00:24:59.083 Admin Max SQ Size: 32 00:24:59.083 Transport Service Identifier: 4420 00:24:59.083 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:59.083 Transport Address: 10.0.0.1 00:24:59.083 12:21:52 -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:59.342 get_feature(0x01) failed 00:24:59.342 get_feature(0x02) failed 00:24:59.342 get_feature(0x04) failed 00:24:59.342 ===================================================== 00:24:59.342 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:59.342 ===================================================== 00:24:59.342 Controller Capabilities/Features 00:24:59.342 ================================ 00:24:59.342 Vendor ID: 0000 00:24:59.342 Subsystem Vendor ID: 0000 00:24:59.342 Serial Number: cbab240f454a6a3fd85b 00:24:59.342 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:59.342 Firmware Version: 6.7.0-68 00:24:59.342 Recommended Arb Burst: 6 00:24:59.342 IEEE OUI Identifier: 00 00 00 00:24:59.342 Multi-path I/O 00:24:59.342 May have multiple subsystem ports: Yes 00:24:59.342 May have multiple controllers: Yes 00:24:59.342 Associated with SR-IOV VF: No 00:24:59.342 Max Data Transfer Size: Unlimited 00:24:59.342 Max Number of Namespaces: 1024 00:24:59.342 Max Number of I/O Queues: 128 00:24:59.342 NVMe Specification Version (VS): 1.3 00:24:59.342 NVMe Specification Version (Identify): 1.3 00:24:59.342 Maximum Queue Entries: 1024 00:24:59.342 Contiguous Queues Required: No 00:24:59.342 Arbitration Mechanisms Supported 00:24:59.342 Weighted Round Robin: Not Supported 00:24:59.342 Vendor Specific: Not Supported 00:24:59.342 Reset Timeout: 7500 ms 00:24:59.342 Doorbell Stride: 4 bytes 
00:24:59.342 NVM Subsystem Reset: Not Supported 00:24:59.342 Command Sets Supported 00:24:59.342 NVM Command Set: Supported 00:24:59.342 Boot Partition: Not Supported 00:24:59.342 Memory Page Size Minimum: 4096 bytes 00:24:59.342 Memory Page Size Maximum: 4096 bytes 00:24:59.342 Persistent Memory Region: Not Supported 00:24:59.342 Optional Asynchronous Events Supported 00:24:59.342 Namespace Attribute Notices: Supported 00:24:59.342 Firmware Activation Notices: Not Supported 00:24:59.342 ANA Change Notices: Supported 00:24:59.342 PLE Aggregate Log Change Notices: Not Supported 00:24:59.342 LBA Status Info Alert Notices: Not Supported 00:24:59.342 EGE Aggregate Log Change Notices: Not Supported 00:24:59.342 Normal NVM Subsystem Shutdown event: Not Supported 00:24:59.342 Zone Descriptor Change Notices: Not Supported 00:24:59.342 Discovery Log Change Notices: Not Supported 00:24:59.342 Controller Attributes 00:24:59.342 128-bit Host Identifier: Supported 00:24:59.342 Non-Operational Permissive Mode: Not Supported 00:24:59.342 NVM Sets: Not Supported 00:24:59.342 Read Recovery Levels: Not Supported 00:24:59.342 Endurance Groups: Not Supported 00:24:59.342 Predictable Latency Mode: Not Supported 00:24:59.342 Traffic Based Keep ALive: Supported 00:24:59.342 Namespace Granularity: Not Supported 00:24:59.342 SQ Associations: Not Supported 00:24:59.342 UUID List: Not Supported 00:24:59.342 Multi-Domain Subsystem: Not Supported 00:24:59.342 Fixed Capacity Management: Not Supported 00:24:59.342 Variable Capacity Management: Not Supported 00:24:59.342 Delete Endurance Group: Not Supported 00:24:59.342 Delete NVM Set: Not Supported 00:24:59.342 Extended LBA Formats Supported: Not Supported 00:24:59.342 Flexible Data Placement Supported: Not Supported 00:24:59.342 00:24:59.342 Controller Memory Buffer Support 00:24:59.343 ================================ 00:24:59.343 Supported: No 00:24:59.343 00:24:59.343 Persistent Memory Region Support 00:24:59.343 ================================ 00:24:59.343 Supported: No 00:24:59.343 00:24:59.343 Admin Command Set Attributes 00:24:59.343 ============================ 00:24:59.343 Security Send/Receive: Not Supported 00:24:59.343 Format NVM: Not Supported 00:24:59.343 Firmware Activate/Download: Not Supported 00:24:59.343 Namespace Management: Not Supported 00:24:59.343 Device Self-Test: Not Supported 00:24:59.343 Directives: Not Supported 00:24:59.343 NVMe-MI: Not Supported 00:24:59.343 Virtualization Management: Not Supported 00:24:59.343 Doorbell Buffer Config: Not Supported 00:24:59.343 Get LBA Status Capability: Not Supported 00:24:59.343 Command & Feature Lockdown Capability: Not Supported 00:24:59.343 Abort Command Limit: 4 00:24:59.343 Async Event Request Limit: 4 00:24:59.343 Number of Firmware Slots: N/A 00:24:59.343 Firmware Slot 1 Read-Only: N/A 00:24:59.343 Firmware Activation Without Reset: N/A 00:24:59.343 Multiple Update Detection Support: N/A 00:24:59.343 Firmware Update Granularity: No Information Provided 00:24:59.343 Per-Namespace SMART Log: Yes 00:24:59.343 Asymmetric Namespace Access Log Page: Supported 00:24:59.343 ANA Transition Time : 10 sec 00:24:59.343 00:24:59.343 Asymmetric Namespace Access Capabilities 00:24:59.343 ANA Optimized State : Supported 00:24:59.343 ANA Non-Optimized State : Supported 00:24:59.343 ANA Inaccessible State : Supported 00:24:59.343 ANA Persistent Loss State : Supported 00:24:59.343 ANA Change State : Supported 00:24:59.343 ANAGRPID is not changed : No 00:24:59.343 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 
00:24:59.343 00:24:59.343 ANA Group Identifier Maximum : 128 00:24:59.343 Number of ANA Group Identifiers : 128 00:24:59.343 Max Number of Allowed Namespaces : 1024 00:24:59.343 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:59.343 Command Effects Log Page: Supported 00:24:59.343 Get Log Page Extended Data: Supported 00:24:59.343 Telemetry Log Pages: Not Supported 00:24:59.343 Persistent Event Log Pages: Not Supported 00:24:59.343 Supported Log Pages Log Page: May Support 00:24:59.343 Commands Supported & Effects Log Page: Not Supported 00:24:59.343 Feature Identifiers & Effects Log Page:May Support 00:24:59.343 NVMe-MI Commands & Effects Log Page: May Support 00:24:59.343 Data Area 4 for Telemetry Log: Not Supported 00:24:59.343 Error Log Page Entries Supported: 128 00:24:59.343 Keep Alive: Supported 00:24:59.343 Keep Alive Granularity: 1000 ms 00:24:59.343 00:24:59.343 NVM Command Set Attributes 00:24:59.343 ========================== 00:24:59.343 Submission Queue Entry Size 00:24:59.343 Max: 64 00:24:59.343 Min: 64 00:24:59.343 Completion Queue Entry Size 00:24:59.343 Max: 16 00:24:59.343 Min: 16 00:24:59.343 Number of Namespaces: 1024 00:24:59.343 Compare Command: Not Supported 00:24:59.343 Write Uncorrectable Command: Not Supported 00:24:59.343 Dataset Management Command: Supported 00:24:59.343 Write Zeroes Command: Supported 00:24:59.343 Set Features Save Field: Not Supported 00:24:59.343 Reservations: Not Supported 00:24:59.343 Timestamp: Not Supported 00:24:59.343 Copy: Not Supported 00:24:59.343 Volatile Write Cache: Present 00:24:59.343 Atomic Write Unit (Normal): 1 00:24:59.343 Atomic Write Unit (PFail): 1 00:24:59.343 Atomic Compare & Write Unit: 1 00:24:59.343 Fused Compare & Write: Not Supported 00:24:59.343 Scatter-Gather List 00:24:59.343 SGL Command Set: Supported 00:24:59.343 SGL Keyed: Not Supported 00:24:59.343 SGL Bit Bucket Descriptor: Not Supported 00:24:59.343 SGL Metadata Pointer: Not Supported 00:24:59.343 Oversized SGL: Not Supported 00:24:59.343 SGL Metadata Address: Not Supported 00:24:59.343 SGL Offset: Supported 00:24:59.343 Transport SGL Data Block: Not Supported 00:24:59.343 Replay Protected Memory Block: Not Supported 00:24:59.343 00:24:59.343 Firmware Slot Information 00:24:59.343 ========================= 00:24:59.343 Active slot: 0 00:24:59.343 00:24:59.343 Asymmetric Namespace Access 00:24:59.343 =========================== 00:24:59.343 Change Count : 0 00:24:59.343 Number of ANA Group Descriptors : 1 00:24:59.343 ANA Group Descriptor : 0 00:24:59.343 ANA Group ID : 1 00:24:59.343 Number of NSID Values : 1 00:24:59.343 Change Count : 0 00:24:59.343 ANA State : 1 00:24:59.343 Namespace Identifier : 1 00:24:59.343 00:24:59.343 Commands Supported and Effects 00:24:59.343 ============================== 00:24:59.343 Admin Commands 00:24:59.343 -------------- 00:24:59.343 Get Log Page (02h): Supported 00:24:59.343 Identify (06h): Supported 00:24:59.343 Abort (08h): Supported 00:24:59.343 Set Features (09h): Supported 00:24:59.343 Get Features (0Ah): Supported 00:24:59.343 Asynchronous Event Request (0Ch): Supported 00:24:59.343 Keep Alive (18h): Supported 00:24:59.343 I/O Commands 00:24:59.343 ------------ 00:24:59.343 Flush (00h): Supported 00:24:59.343 Write (01h): Supported LBA-Change 00:24:59.343 Read (02h): Supported 00:24:59.343 Write Zeroes (08h): Supported LBA-Change 00:24:59.343 Dataset Management (09h): Supported 00:24:59.343 00:24:59.343 Error Log 00:24:59.343 ========= 00:24:59.343 Entry: 0 00:24:59.343 Error Count: 0x3 00:24:59.343 Submission 
Queue Id: 0x0 00:24:59.343 Command Id: 0x5 00:24:59.343 Phase Bit: 0 00:24:59.343 Status Code: 0x2 00:24:59.343 Status Code Type: 0x0 00:24:59.343 Do Not Retry: 1 00:24:59.343 Error Location: 0x28 00:24:59.343 LBA: 0x0 00:24:59.343 Namespace: 0x0 00:24:59.343 Vendor Log Page: 0x0 00:24:59.343 ----------- 00:24:59.343 Entry: 1 00:24:59.343 Error Count: 0x2 00:24:59.343 Submission Queue Id: 0x0 00:24:59.343 Command Id: 0x5 00:24:59.343 Phase Bit: 0 00:24:59.343 Status Code: 0x2 00:24:59.343 Status Code Type: 0x0 00:24:59.343 Do Not Retry: 1 00:24:59.343 Error Location: 0x28 00:24:59.343 LBA: 0x0 00:24:59.343 Namespace: 0x0 00:24:59.343 Vendor Log Page: 0x0 00:24:59.343 ----------- 00:24:59.343 Entry: 2 00:24:59.343 Error Count: 0x1 00:24:59.343 Submission Queue Id: 0x0 00:24:59.343 Command Id: 0x4 00:24:59.343 Phase Bit: 0 00:24:59.343 Status Code: 0x2 00:24:59.343 Status Code Type: 0x0 00:24:59.343 Do Not Retry: 1 00:24:59.343 Error Location: 0x28 00:24:59.343 LBA: 0x0 00:24:59.343 Namespace: 0x0 00:24:59.343 Vendor Log Page: 0x0 00:24:59.343 00:24:59.343 Number of Queues 00:24:59.343 ================ 00:24:59.343 Number of I/O Submission Queues: 128 00:24:59.343 Number of I/O Completion Queues: 128 00:24:59.343 00:24:59.343 ZNS Specific Controller Data 00:24:59.343 ============================ 00:24:59.343 Zone Append Size Limit: 0 00:24:59.343 00:24:59.343 00:24:59.343 Active Namespaces 00:24:59.343 ================= 00:24:59.343 get_feature(0x05) failed 00:24:59.343 Namespace ID:1 00:24:59.343 Command Set Identifier: NVM (00h) 00:24:59.343 Deallocate: Supported 00:24:59.343 Deallocated/Unwritten Error: Not Supported 00:24:59.343 Deallocated Read Value: Unknown 00:24:59.343 Deallocate in Write Zeroes: Not Supported 00:24:59.343 Deallocated Guard Field: 0xFFFF 00:24:59.343 Flush: Supported 00:24:59.343 Reservation: Not Supported 00:24:59.343 Namespace Sharing Capabilities: Multiple Controllers 00:24:59.343 Size (in LBAs): 1310720 (5GiB) 00:24:59.343 Capacity (in LBAs): 1310720 (5GiB) 00:24:59.343 Utilization (in LBAs): 1310720 (5GiB) 00:24:59.343 UUID: 1d5e31f6-d4ba-448a-8d93-acffa8cded70 00:24:59.343 Thin Provisioning: Not Supported 00:24:59.343 Per-NS Atomic Units: Yes 00:24:59.343 Atomic Boundary Size (Normal): 0 00:24:59.343 Atomic Boundary Size (PFail): 0 00:24:59.343 Atomic Boundary Offset: 0 00:24:59.343 NGUID/EUI64 Never Reused: No 00:24:59.343 ANA group ID: 1 00:24:59.343 Namespace Write Protected: No 00:24:59.343 Number of LBA Formats: 1 00:24:59.343 Current LBA Format: LBA Format #00 00:24:59.343 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:24:59.343 00:24:59.343 12:21:52 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:59.343 12:21:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:59.343 12:21:52 -- nvmf/common.sh@117 -- # sync 00:24:59.343 12:21:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:59.343 12:21:52 -- nvmf/common.sh@120 -- # set +e 00:24:59.343 12:21:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:59.343 12:21:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:59.343 rmmod nvme_tcp 00:24:59.343 rmmod nvme_fabrics 00:24:59.344 12:21:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:59.344 12:21:52 -- nvmf/common.sh@124 -- # set -e 00:24:59.344 12:21:52 -- nvmf/common.sh@125 -- # return 0 00:24:59.344 12:21:52 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:24:59.344 12:21:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:59.344 12:21:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:59.344 12:21:52 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:59.344 12:21:52 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:59.344 12:21:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:59.344 12:21:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.344 12:21:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:59.344 12:21:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.603 12:21:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:59.603 12:21:52 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:59.603 12:21:52 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:59.603 12:21:52 -- nvmf/common.sh@675 -- # echo 0 00:24:59.603 12:21:52 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:59.603 12:21:52 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:59.603 12:21:52 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:59.603 12:21:52 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:59.603 12:21:52 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:59.603 12:21:52 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:24:59.603 12:21:52 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:00.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:00.426 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:00.426 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:00.426 00:25:00.426 real 0m2.811s 00:25:00.426 user 0m0.955s 00:25:00.426 sys 0m1.361s 00:25:00.426 12:21:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:00.426 12:21:53 -- common/autotest_common.sh@10 -- # set +x 00:25:00.426 ************************************ 00:25:00.426 END TEST nvmf_identify_kernel_target 00:25:00.426 ************************************ 00:25:00.426 12:21:53 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:00.426 12:21:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:00.426 12:21:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:00.426 12:21:53 -- common/autotest_common.sh@10 -- # set +x 00:25:00.426 ************************************ 00:25:00.426 START TEST nvmf_auth 00:25:00.426 ************************************ 00:25:00.426 12:21:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:00.684 * Looking for test storage... 
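The controller dumps above were produced by pointing SPDK's spdk_nvme_identify at the kernel nvmet target over TCP, once for the discovery subsystem and once for nqn.2016-06.io.spdk:testnqn. A minimal way to reproduce the same two dumps by hand, using the binary path and connection strings exactly as they appear in the identify_kernel_nvmf.sh trace (nothing here is new; both invocations are copied from the log):

IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify

# 1) Discovery subsystem: returns the two discovery log entries listed above.
"$IDENTIFY" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'

# 2) Data subsystem exported by the kernel target. The get_feature(0x01/0x02/0x04)
#    failures in the trace are expected: the kernel target does not implement
#    those optional features.
"$IDENTIFY" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
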
00:25:00.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:00.684 12:21:53 -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:00.685 12:21:53 -- nvmf/common.sh@7 -- # uname -s 00:25:00.685 12:21:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.685 12:21:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.685 12:21:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.685 12:21:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.685 12:21:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.685 12:21:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.685 12:21:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.685 12:21:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.685 12:21:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.685 12:21:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.685 12:21:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:25:00.685 12:21:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:25:00.685 12:21:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.685 12:21:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.685 12:21:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:00.685 12:21:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.685 12:21:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:00.685 12:21:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.685 12:21:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.685 12:21:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.685 12:21:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.685 12:21:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.685 12:21:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.685 12:21:53 -- paths/export.sh@5 -- # export PATH 00:25:00.685 12:21:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.685 12:21:53 -- nvmf/common.sh@47 -- # : 0 00:25:00.685 12:21:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:00.685 12:21:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:00.685 12:21:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.685 12:21:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.685 12:21:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.685 12:21:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:00.685 12:21:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:00.685 12:21:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:00.685 12:21:53 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:00.685 12:21:53 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:00.685 12:21:53 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:00.685 12:21:53 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:00.685 12:21:53 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:00.685 12:21:53 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:00.685 12:21:53 -- host/auth.sh@21 -- # keys=() 00:25:00.685 12:21:53 -- host/auth.sh@77 -- # nvmftestinit 00:25:00.685 12:21:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:00.685 12:21:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.685 12:21:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:00.685 12:21:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:00.685 12:21:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:00.685 12:21:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.685 12:21:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:00.685 12:21:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.685 12:21:53 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:00.685 12:21:53 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:00.685 12:21:53 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:00.685 12:21:53 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:00.685 12:21:53 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:00.685 12:21:53 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:00.685 12:21:53 -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.685 12:21:53 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.685 12:21:53 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:00.685 12:21:53 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:00.685 12:21:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:00.685 12:21:53 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:00.685 12:21:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:00.685 12:21:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.685 12:21:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:00.685 12:21:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:00.685 12:21:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:00.685 12:21:53 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:00.685 12:21:53 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:00.685 12:21:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:00.685 Cannot find device "nvmf_tgt_br" 00:25:00.685 12:21:54 -- nvmf/common.sh@155 -- # true 00:25:00.685 12:21:54 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:00.685 Cannot find device "nvmf_tgt_br2" 00:25:00.685 12:21:54 -- nvmf/common.sh@156 -- # true 00:25:00.685 12:21:54 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:00.685 12:21:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:00.685 Cannot find device "nvmf_tgt_br" 00:25:00.685 12:21:54 -- nvmf/common.sh@158 -- # true 00:25:00.685 12:21:54 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:00.685 Cannot find device "nvmf_tgt_br2" 00:25:00.685 12:21:54 -- nvmf/common.sh@159 -- # true 00:25:00.685 12:21:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:00.685 12:21:54 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:00.685 12:21:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:00.685 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:00.685 12:21:54 -- nvmf/common.sh@162 -- # true 00:25:00.685 12:21:54 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:00.685 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:00.685 12:21:54 -- nvmf/common.sh@163 -- # true 00:25:00.685 12:21:54 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:00.685 12:21:54 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:00.685 12:21:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:00.685 12:21:54 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:00.685 12:21:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:00.942 12:21:54 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:00.942 12:21:54 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:00.942 12:21:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:00.942 12:21:54 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:00.942 12:21:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:00.942 12:21:54 -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:00.942 12:21:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:00.942 12:21:54 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:00.942 12:21:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:00.943 12:21:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:00.943 12:21:54 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:00.943 12:21:54 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:00.943 12:21:54 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:00.943 12:21:54 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:00.943 12:21:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:00.943 12:21:54 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:00.943 12:21:54 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:00.943 12:21:54 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:00.943 12:21:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:00.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:25:00.943 00:25:00.943 --- 10.0.0.2 ping statistics --- 00:25:00.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.943 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:25:00.943 12:21:54 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:00.943 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:00.943 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:25:00.943 00:25:00.943 --- 10.0.0.3 ping statistics --- 00:25:00.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.943 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:25:00.943 12:21:54 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:00.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:00.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:25:00.943 00:25:00.943 --- 10.0.0.1 ping statistics --- 00:25:00.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.943 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:25:00.943 12:21:54 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.943 12:21:54 -- nvmf/common.sh@422 -- # return 0 00:25:00.943 12:21:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:00.943 12:21:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.943 12:21:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:00.943 12:21:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:00.943 12:21:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.943 12:21:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:00.943 12:21:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:00.943 12:21:54 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:25:00.943 12:21:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:00.943 12:21:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:00.943 12:21:54 -- common/autotest_common.sh@10 -- # set +x 00:25:00.943 12:21:54 -- nvmf/common.sh@470 -- # nvmfpid=74715 00:25:00.943 12:21:54 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:00.943 12:21:54 -- nvmf/common.sh@471 -- # waitforlisten 74715 00:25:00.943 12:21:54 -- common/autotest_common.sh@817 -- # '[' -z 74715 ']' 00:25:00.943 12:21:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.943 12:21:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:00.943 12:21:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
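The nvmf_veth_init trace above builds a small veth-plus-bridge topology so that the target side of the test runs in its own network namespace, and the ping checks confirm reachability in both directions before nvmf_tgt is started inside that namespace. A condensed sketch of the same setup, with all commands taken from the trace; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is configured the same way and is omitted here for brevity, as are the best-effort teardown commands that run first:

# Namespace and veth pairs: nvmf_init_if stays on the host, nvmf_tgt_if is
# moved into the namespace; their peers are bridged together on the host.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1 is the initiator, 10.0.0.2 the target inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bring everything up and bridge the two host-side peer ends.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Allow NVMe/TCP traffic and bridge forwarding, then sanity-check with ping.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

# The target application is then launched inside the namespace with the
# nvme_auth debug log component enabled, as in the nvmfappstart trace above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
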
00:25:00.943 12:21:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:00.943 12:21:54 -- common/autotest_common.sh@10 -- # set +x 00:25:01.878 12:21:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:01.878 12:21:55 -- common/autotest_common.sh@850 -- # return 0 00:25:01.878 12:21:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:01.878 12:21:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:01.878 12:21:55 -- common/autotest_common.sh@10 -- # set +x 00:25:01.878 12:21:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.878 12:21:55 -- host/auth.sh@79 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:01.878 12:21:55 -- host/auth.sh@81 -- # gen_key null 32 00:25:01.878 12:21:55 -- host/auth.sh@53 -- # local digest len file key 00:25:01.878 12:21:55 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:01.878 12:21:55 -- host/auth.sh@54 -- # local -A digests 00:25:01.878 12:21:55 -- host/auth.sh@56 -- # digest=null 00:25:01.878 12:21:55 -- host/auth.sh@56 -- # len=32 00:25:01.878 12:21:55 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:01.878 12:21:55 -- host/auth.sh@57 -- # key=c9d1c6439c1cae99f70e531302849fc9 00:25:01.878 12:21:55 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:25:02.137 12:21:55 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.4rx 00:25:02.137 12:21:55 -- host/auth.sh@59 -- # format_dhchap_key c9d1c6439c1cae99f70e531302849fc9 0 00:25:02.137 12:21:55 -- nvmf/common.sh@708 -- # format_key DHHC-1 c9d1c6439c1cae99f70e531302849fc9 0 00:25:02.137 12:21:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:02.137 12:21:55 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:02.137 12:21:55 -- nvmf/common.sh@693 -- # key=c9d1c6439c1cae99f70e531302849fc9 00:25:02.137 12:21:55 -- nvmf/common.sh@693 -- # digest=0 00:25:02.137 12:21:55 -- nvmf/common.sh@694 -- # python - 00:25:02.137 12:21:55 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.4rx 00:25:02.137 12:21:55 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.4rx 00:25:02.137 12:21:55 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.4rx 00:25:02.137 12:21:55 -- host/auth.sh@82 -- # gen_key null 48 00:25:02.137 12:21:55 -- host/auth.sh@53 -- # local digest len file key 00:25:02.137 12:21:55 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:02.137 12:21:55 -- host/auth.sh@54 -- # local -A digests 00:25:02.137 12:21:55 -- host/auth.sh@56 -- # digest=null 00:25:02.137 12:21:55 -- host/auth.sh@56 -- # len=48 00:25:02.137 12:21:55 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:02.137 12:21:55 -- host/auth.sh@57 -- # key=7b42deb7382e2e5e1ddef99e5c12a2950ec623b3a94f32a0 00:25:02.137 12:21:55 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:25:02.137 12:21:55 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.Izw 00:25:02.137 12:21:55 -- host/auth.sh@59 -- # format_dhchap_key 7b42deb7382e2e5e1ddef99e5c12a2950ec623b3a94f32a0 0 00:25:02.137 12:21:55 -- nvmf/common.sh@708 -- # format_key DHHC-1 7b42deb7382e2e5e1ddef99e5c12a2950ec623b3a94f32a0 0 00:25:02.137 12:21:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:02.137 12:21:55 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:02.137 12:21:55 -- nvmf/common.sh@693 -- # key=7b42deb7382e2e5e1ddef99e5c12a2950ec623b3a94f32a0 00:25:02.137 12:21:55 -- nvmf/common.sh@693 -- # digest=0 00:25:02.137 
12:21:55 -- nvmf/common.sh@694 -- # python - 00:25:02.137 12:21:55 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.Izw 00:25:02.137 12:21:55 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.Izw 00:25:02.137 12:21:55 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.Izw 00:25:02.137 12:21:55 -- host/auth.sh@83 -- # gen_key sha256 32 00:25:02.137 12:21:55 -- host/auth.sh@53 -- # local digest len file key 00:25:02.137 12:21:55 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:02.137 12:21:55 -- host/auth.sh@54 -- # local -A digests 00:25:02.137 12:21:55 -- host/auth.sh@56 -- # digest=sha256 00:25:02.137 12:21:55 -- host/auth.sh@56 -- # len=32 00:25:02.137 12:21:55 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:02.137 12:21:55 -- host/auth.sh@57 -- # key=b3aac662c27c7c001932643ffd2c8707 00:25:02.137 12:21:55 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:25:02.137 12:21:55 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.FpP 00:25:02.137 12:21:55 -- host/auth.sh@59 -- # format_dhchap_key b3aac662c27c7c001932643ffd2c8707 1 00:25:02.137 12:21:55 -- nvmf/common.sh@708 -- # format_key DHHC-1 b3aac662c27c7c001932643ffd2c8707 1 00:25:02.137 12:21:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:02.137 12:21:55 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:02.137 12:21:55 -- nvmf/common.sh@693 -- # key=b3aac662c27c7c001932643ffd2c8707 00:25:02.137 12:21:55 -- nvmf/common.sh@693 -- # digest=1 00:25:02.137 12:21:55 -- nvmf/common.sh@694 -- # python - 00:25:02.137 12:21:55 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.FpP 00:25:02.137 12:21:55 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.FpP 00:25:02.137 12:21:55 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.FpP 00:25:02.137 12:21:55 -- host/auth.sh@84 -- # gen_key sha384 48 00:25:02.137 12:21:55 -- host/auth.sh@53 -- # local digest len file key 00:25:02.137 12:21:55 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:02.137 12:21:55 -- host/auth.sh@54 -- # local -A digests 00:25:02.137 12:21:55 -- host/auth.sh@56 -- # digest=sha384 00:25:02.137 12:21:55 -- host/auth.sh@56 -- # len=48 00:25:02.137 12:21:55 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:02.137 12:21:55 -- host/auth.sh@57 -- # key=a5d6a0a6858cc147cbd75ba38f8a39a92d667061487e2d81 00:25:02.137 12:21:55 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:25:02.137 12:21:55 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.b4S 00:25:02.137 12:21:55 -- host/auth.sh@59 -- # format_dhchap_key a5d6a0a6858cc147cbd75ba38f8a39a92d667061487e2d81 2 00:25:02.137 12:21:55 -- nvmf/common.sh@708 -- # format_key DHHC-1 a5d6a0a6858cc147cbd75ba38f8a39a92d667061487e2d81 2 00:25:02.137 12:21:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:02.137 12:21:55 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:02.137 12:21:55 -- nvmf/common.sh@693 -- # key=a5d6a0a6858cc147cbd75ba38f8a39a92d667061487e2d81 00:25:02.137 12:21:55 -- nvmf/common.sh@693 -- # digest=2 00:25:02.137 12:21:55 -- nvmf/common.sh@694 -- # python - 00:25:02.137 12:21:55 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.b4S 00:25:02.137 12:21:55 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.b4S 00:25:02.137 12:21:55 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.b4S 00:25:02.137 12:21:55 -- host/auth.sh@85 -- # gen_key sha512 64 00:25:02.137 12:21:55 -- host/auth.sh@53 -- # local digest len file key 00:25:02.137 12:21:55 -- host/auth.sh@54 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:02.137 12:21:55 -- host/auth.sh@54 -- # local -A digests 00:25:02.137 12:21:55 -- host/auth.sh@56 -- # digest=sha512 00:25:02.137 12:21:55 -- host/auth.sh@56 -- # len=64 00:25:02.137 12:21:55 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:02.137 12:21:55 -- host/auth.sh@57 -- # key=df6ed59545b01518252c9702540f81da3a4070bfdaa765335891beb320280568 00:25:02.137 12:21:55 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:25:02.137 12:21:55 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.pax 00:25:02.137 12:21:55 -- host/auth.sh@59 -- # format_dhchap_key df6ed59545b01518252c9702540f81da3a4070bfdaa765335891beb320280568 3 00:25:02.137 12:21:55 -- nvmf/common.sh@708 -- # format_key DHHC-1 df6ed59545b01518252c9702540f81da3a4070bfdaa765335891beb320280568 3 00:25:02.137 12:21:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:02.137 12:21:55 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:02.137 12:21:55 -- nvmf/common.sh@693 -- # key=df6ed59545b01518252c9702540f81da3a4070bfdaa765335891beb320280568 00:25:02.137 12:21:55 -- nvmf/common.sh@693 -- # digest=3 00:25:02.137 12:21:55 -- nvmf/common.sh@694 -- # python - 00:25:02.396 12:21:55 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.pax 00:25:02.396 12:21:55 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.pax 00:25:02.396 12:21:55 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.pax 00:25:02.396 12:21:55 -- host/auth.sh@87 -- # waitforlisten 74715 00:25:02.396 12:21:55 -- common/autotest_common.sh@817 -- # '[' -z 74715 ']' 00:25:02.396 12:21:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.396 12:21:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:02.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.396 12:21:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
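The gen_key calls traced above create one secret per digest: a random hex string is read from /dev/urandom with xxd, stored in a mktemp file, and wrapped into the DHHC-1:<digest-id>:<base64>: form used later by both the host and the kernel target. The python one-liner that does the wrapping is not shown in the trace, so the sketch below is a reconstruction; the assumption that the base64 payload is the ASCII hex secret followed by its little-endian CRC32 is inferred from how the DHHC-1 strings later in this log decode back to the generated hex values, and should be treated as such:

# Rough equivalent of gen_key null 32 / format_dhchap_key; only the xxd,
# mktemp and chmod steps appear verbatim in the trace above.
len=32                                       # 32 hex characters for the null-digest key
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'PY'
import base64, binascii, sys
secret = sys.argv[1].encode()                        # the ASCII hex string, not raw bytes
crc = binascii.crc32(secret).to_bytes(4, "little")   # assumed CRC32 suffix
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
PY
chmod 0600 "$file"
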
00:25:02.396 12:21:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:02.396 12:21:55 -- common/autotest_common.sh@10 -- # set +x 00:25:02.742 12:21:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:02.742 12:21:55 -- common/autotest_common.sh@850 -- # return 0 00:25:02.742 12:21:55 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:02.742 12:21:55 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4rx 00:25:02.742 12:21:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.742 12:21:55 -- common/autotest_common.sh@10 -- # set +x 00:25:02.742 12:21:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.742 12:21:55 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:02.742 12:21:55 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Izw 00:25:02.742 12:21:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.742 12:21:55 -- common/autotest_common.sh@10 -- # set +x 00:25:02.742 12:21:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.742 12:21:55 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:02.742 12:21:55 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.FpP 00:25:02.742 12:21:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.742 12:21:55 -- common/autotest_common.sh@10 -- # set +x 00:25:02.742 12:21:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.742 12:21:55 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:02.742 12:21:55 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.b4S 00:25:02.742 12:21:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.742 12:21:55 -- common/autotest_common.sh@10 -- # set +x 00:25:02.742 12:21:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.742 12:21:55 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:02.742 12:21:55 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.pax 00:25:02.742 12:21:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.742 12:21:55 -- common/autotest_common.sh@10 -- # set +x 00:25:02.742 12:21:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.742 12:21:55 -- host/auth.sh@92 -- # nvmet_auth_init 00:25:02.742 12:21:55 -- host/auth.sh@35 -- # get_main_ns_ip 00:25:02.742 12:21:55 -- nvmf/common.sh@717 -- # local ip 00:25:02.742 12:21:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:02.742 12:21:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:02.742 12:21:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.742 12:21:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.742 12:21:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:02.742 12:21:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.742 12:21:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:02.742 12:21:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:02.742 12:21:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:02.742 12:21:55 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:02.742 12:21:55 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:02.742 12:21:55 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:25:02.742 12:21:55 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:02.742 12:21:55 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:02.742 12:21:55 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:02.742 12:21:55 -- nvmf/common.sh@628 -- # local block nvme 00:25:02.742 12:21:55 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:25:02.742 12:21:55 -- nvmf/common.sh@631 -- # modprobe nvmet 00:25:02.742 12:21:55 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:02.742 12:21:55 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:03.001 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:03.001 Waiting for block devices as requested 00:25:03.001 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:03.001 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:03.568 12:21:57 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:03.568 12:21:57 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:03.568 12:21:57 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:25:03.568 12:21:57 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:03.568 12:21:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:03.568 12:21:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:03.568 12:21:57 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:25:03.568 12:21:57 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:03.568 12:21:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:03.827 No valid GPT data, bailing 00:25:03.827 12:21:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:03.827 12:21:57 -- scripts/common.sh@391 -- # pt= 00:25:03.827 12:21:57 -- scripts/common.sh@392 -- # return 1 00:25:03.827 12:21:57 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:25:03.827 12:21:57 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:03.827 12:21:57 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:03.827 12:21:57 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:25:03.827 12:21:57 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:25:03.827 12:21:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:03.827 12:21:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:03.827 12:21:57 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:25:03.827 12:21:57 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:25:03.827 12:21:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:03.827 No valid GPT data, bailing 00:25:03.827 12:21:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:03.827 12:21:57 -- scripts/common.sh@391 -- # pt= 00:25:03.827 12:21:57 -- scripts/common.sh@392 -- # return 1 00:25:03.827 12:21:57 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:25:03.827 12:21:57 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:03.827 12:21:57 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:03.827 12:21:57 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:25:03.827 12:21:57 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:25:03.827 12:21:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:03.827 12:21:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:03.827 12:21:57 -- nvmf/common.sh@642 -- # block_in_use 
nvme0n3 00:25:03.827 12:21:57 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:25:03.827 12:21:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:03.827 No valid GPT data, bailing 00:25:03.827 12:21:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:03.827 12:21:57 -- scripts/common.sh@391 -- # pt= 00:25:03.827 12:21:57 -- scripts/common.sh@392 -- # return 1 00:25:03.827 12:21:57 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:25:03.827 12:21:57 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:03.827 12:21:57 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:03.827 12:21:57 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:25:03.827 12:21:57 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:25:03.827 12:21:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:03.827 12:21:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:03.827 12:21:57 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:25:03.827 12:21:57 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:03.827 12:21:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:04.086 No valid GPT data, bailing 00:25:04.086 12:21:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:04.086 12:21:57 -- scripts/common.sh@391 -- # pt= 00:25:04.086 12:21:57 -- scripts/common.sh@392 -- # return 1 00:25:04.086 12:21:57 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:25:04.086 12:21:57 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:25:04.086 12:21:57 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:04.086 12:21:57 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:04.086 12:21:57 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:04.086 12:21:57 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:04.086 12:21:57 -- nvmf/common.sh@656 -- # echo 1 00:25:04.086 12:21:57 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:25:04.086 12:21:57 -- nvmf/common.sh@658 -- # echo 1 00:25:04.086 12:21:57 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:25:04.086 12:21:57 -- nvmf/common.sh@661 -- # echo tcp 00:25:04.086 12:21:57 -- nvmf/common.sh@662 -- # echo 4420 00:25:04.086 12:21:57 -- nvmf/common.sh@663 -- # echo ipv4 00:25:04.086 12:21:57 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:04.086 12:21:57 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 --hostid=df75fdfd-6375-420c-96e8-b6b24e93a083 -a 10.0.0.1 -t tcp -s 4420 00:25:04.086 00:25:04.086 Discovery Log Number of Records 2, Generation counter 2 00:25:04.086 =====Discovery Log Entry 0====== 00:25:04.086 trtype: tcp 00:25:04.086 adrfam: ipv4 00:25:04.086 subtype: current discovery subsystem 00:25:04.086 treq: not specified, sq flow control disable supported 00:25:04.086 portid: 1 00:25:04.086 trsvcid: 4420 00:25:04.086 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:04.086 traddr: 10.0.0.1 00:25:04.086 eflags: none 00:25:04.086 sectype: none 00:25:04.086 =====Discovery Log Entry 1====== 00:25:04.086 trtype: tcp 00:25:04.086 adrfam: ipv4 00:25:04.086 subtype: nvme subsystem 00:25:04.086 treq: not specified, sq flow control disable supported 
00:25:04.086 portid: 1 00:25:04.086 trsvcid: 4420 00:25:04.086 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:04.086 traddr: 10.0.0.1 00:25:04.086 eflags: none 00:25:04.086 sectype: none 00:25:04.086 12:21:57 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:04.086 12:21:57 -- host/auth.sh@37 -- # echo 0 00:25:04.086 12:21:57 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:04.086 12:21:57 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:04.086 12:21:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:04.086 12:21:57 -- host/auth.sh@44 -- # digest=sha256 00:25:04.086 12:21:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.086 12:21:57 -- host/auth.sh@44 -- # keyid=1 00:25:04.086 12:21:57 -- host/auth.sh@45 -- # key=DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:04.086 12:21:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:04.086 12:21:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:04.086 12:21:57 -- host/auth.sh@49 -- # echo DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:04.086 12:21:57 -- host/auth.sh@100 -- # IFS=, 00:25:04.086 12:21:57 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:25:04.086 12:21:57 -- host/auth.sh@100 -- # IFS=, 00:25:04.086 12:21:57 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:04.086 12:21:57 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:04.086 12:21:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:04.086 12:21:57 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:25:04.086 12:21:57 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:04.086 12:21:57 -- host/auth.sh@68 -- # keyid=1 00:25:04.086 12:21:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:04.086 12:21:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.086 12:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:04.086 12:21:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.086 12:21:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:04.086 12:21:57 -- nvmf/common.sh@717 -- # local ip 00:25:04.086 12:21:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:04.086 12:21:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:04.086 12:21:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.086 12:21:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.086 12:21:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:04.086 12:21:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.086 12:21:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:04.086 12:21:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:04.086 12:21:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:04.086 12:21:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:04.086 12:21:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.086 12:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:04.345 
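configure_kernel_target and nvmet_auth_init above assemble the kernel target entirely through the nvmet configfs tree: a subsystem with one namespace backed by the selected block device, a TCP port on 10.0.0.1:4420, and an allowed-hosts entry for nqn.2024-02.io.spdk:host0 that later receives the DHCHAP secret. The mkdir/echo/ln commands are visible in the trace, but xtrace does not show which attribute files the echos write to; the sketch below fills those in from the standard nvmet configfs layout, so the file names (attr_model, attr_allow_any_host, device_path, enable, addr_*, dhchap_*) are assumptions:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

# Subsystem, one namespace backed by the GPT-free block device found above, and a TCP port.
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

# nvmet_auth_init / nvmet_auth_set_key: restrict access to the test host and
# install its DHCHAP parameters ($key1 stands for one of the DHHC-1 secrets
# generated earlier; dhchap_* attribute names are assumed).
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"
echo 'hmac(sha256)' > "$nvmet/hosts/nqn.2024-02.io.spdk:host0/dhchap_hash"
echo 'ffdhe2048'    > "$nvmet/hosts/nqn.2024-02.io.spdk:host0/dhchap_dhgroup"
echo "$key1"        > "$nvmet/hosts/nqn.2024-02.io.spdk:host0/dhchap_key"
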
nvme0n1 00:25:04.345 12:21:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.345 12:21:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.345 12:21:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.345 12:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:04.345 12:21:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:04.345 12:21:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.345 12:21:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.345 12:21:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.345 12:21:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.345 12:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:04.345 12:21:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.345 12:21:57 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:25:04.345 12:21:57 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:04.345 12:21:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:04.345 12:21:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:04.345 12:21:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:04.345 12:21:57 -- host/auth.sh@44 -- # digest=sha256 00:25:04.345 12:21:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.345 12:21:57 -- host/auth.sh@44 -- # keyid=0 00:25:04.345 12:21:57 -- host/auth.sh@45 -- # key=DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:04.345 12:21:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:04.345 12:21:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:04.345 12:21:57 -- host/auth.sh@49 -- # echo DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:04.345 12:21:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:25:04.345 12:21:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:04.345 12:21:57 -- host/auth.sh@68 -- # digest=sha256 00:25:04.345 12:21:57 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:04.345 12:21:57 -- host/auth.sh@68 -- # keyid=0 00:25:04.345 12:21:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:04.345 12:21:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.345 12:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:04.345 12:21:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.345 12:21:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:04.345 12:21:57 -- nvmf/common.sh@717 -- # local ip 00:25:04.345 12:21:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:04.345 12:21:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:04.345 12:21:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.345 12:21:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.345 12:21:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:04.345 12:21:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.345 12:21:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:04.345 12:21:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:04.345 12:21:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:04.345 12:21:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:04.345 12:21:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.345 12:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:04.345 nvme0n1 
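On the host side, each connect_authenticate iteration above registers the generated secret files as keyring entries, limits the initiator to one digest/DH-group combination, and attaches a controller with the matching key; the bdev name (nvme0n1) printed after the attach is what confirms that DH-HMAC-CHAP negotiation succeeded. The rpc_cmd wrapper in the trace talks to the nvmf_tgt RPC socket; assuming it maps to scripts/rpk.py's usual counterpart, scripts/rpc.py, the sequence for the iteration just completed looks like:

# Register the secret files created earlier as keyring entries (one per key).
./scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.4rx
./scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-null.Izw

# Restrict the initiator to the digest / DH group under test ...
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# ... and attach with the key matching what was written to the target's
# /sys/kernel/config/nvmet/hosts/<hostnqn>/dhchap_key.
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0

# Success is verified by listing controllers, then the controller is detached
# so the next digest/dhgroup/key combination can be tried.
./scripts/rpc.py bdev_nvme_get_controllers
./scripts/rpc.py bdev_nvme_detach_controller nvme0
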
00:25:04.345 12:21:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.345 12:21:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.345 12:21:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:04.345 12:21:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.345 12:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:04.605 12:21:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.605 12:21:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.605 12:21:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.605 12:21:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.605 12:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:04.605 12:21:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.605 12:21:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:04.605 12:21:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:04.605 12:21:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:04.605 12:21:57 -- host/auth.sh@44 -- # digest=sha256 00:25:04.605 12:21:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.605 12:21:57 -- host/auth.sh@44 -- # keyid=1 00:25:04.605 12:21:57 -- host/auth.sh@45 -- # key=DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:04.605 12:21:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:04.605 12:21:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:04.605 12:21:57 -- host/auth.sh@49 -- # echo DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:04.605 12:21:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:25:04.605 12:21:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:04.605 12:21:57 -- host/auth.sh@68 -- # digest=sha256 00:25:04.605 12:21:57 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:04.605 12:21:57 -- host/auth.sh@68 -- # keyid=1 00:25:04.605 12:21:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:04.605 12:21:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.605 12:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:04.605 12:21:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.605 12:21:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:04.605 12:21:57 -- nvmf/common.sh@717 -- # local ip 00:25:04.605 12:21:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:04.605 12:21:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:04.605 12:21:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.605 12:21:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.605 12:21:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:04.605 12:21:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.605 12:21:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:04.605 12:21:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:04.605 12:21:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:04.605 12:21:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:04.605 12:21:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.605 12:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:04.605 nvme0n1 00:25:04.605 12:21:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.605 12:21:57 -- host/auth.sh@73 -- # 
jq -r '.[].name' 00:25:04.605 12:21:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.605 12:21:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.605 12:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:04.605 12:21:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.605 12:21:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.605 12:21:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.605 12:21:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.605 12:21:58 -- common/autotest_common.sh@10 -- # set +x 00:25:04.605 12:21:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.605 12:21:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:04.605 12:21:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:04.605 12:21:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:04.605 12:21:58 -- host/auth.sh@44 -- # digest=sha256 00:25:04.605 12:21:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.605 12:21:58 -- host/auth.sh@44 -- # keyid=2 00:25:04.605 12:21:58 -- host/auth.sh@45 -- # key=DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:04.605 12:21:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:04.605 12:21:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:04.605 12:21:58 -- host/auth.sh@49 -- # echo DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:04.605 12:21:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:25:04.605 12:21:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:04.605 12:21:58 -- host/auth.sh@68 -- # digest=sha256 00:25:04.605 12:21:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:04.605 12:21:58 -- host/auth.sh@68 -- # keyid=2 00:25:04.605 12:21:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:04.605 12:21:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.605 12:21:58 -- common/autotest_common.sh@10 -- # set +x 00:25:04.605 12:21:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.605 12:21:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:04.605 12:21:58 -- nvmf/common.sh@717 -- # local ip 00:25:04.605 12:21:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:04.605 12:21:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:04.605 12:21:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.605 12:21:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.605 12:21:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:04.605 12:21:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.605 12:21:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:04.605 12:21:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:04.605 12:21:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:04.605 12:21:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:04.605 12:21:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.605 12:21:58 -- common/autotest_common.sh@10 -- # set +x 00:25:04.864 nvme0n1 00:25:04.864 12:21:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.864 12:21:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.864 12:21:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:04.864 12:21:58 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:25:04.864 12:21:58 -- common/autotest_common.sh@10 -- # set +x 00:25:04.864 12:21:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.864 12:21:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.864 12:21:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.864 12:21:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.864 12:21:58 -- common/autotest_common.sh@10 -- # set +x 00:25:04.864 12:21:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.864 12:21:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:04.864 12:21:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:04.864 12:21:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:04.864 12:21:58 -- host/auth.sh@44 -- # digest=sha256 00:25:04.864 12:21:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.864 12:21:58 -- host/auth.sh@44 -- # keyid=3 00:25:04.864 12:21:58 -- host/auth.sh@45 -- # key=DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:04.864 12:21:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:04.864 12:21:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:04.864 12:21:58 -- host/auth.sh@49 -- # echo DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:04.864 12:21:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:25:04.864 12:21:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:04.864 12:21:58 -- host/auth.sh@68 -- # digest=sha256 00:25:04.864 12:21:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:04.864 12:21:58 -- host/auth.sh@68 -- # keyid=3 00:25:04.864 12:21:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:04.864 12:21:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.864 12:21:58 -- common/autotest_common.sh@10 -- # set +x 00:25:04.864 12:21:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.864 12:21:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:04.864 12:21:58 -- nvmf/common.sh@717 -- # local ip 00:25:04.864 12:21:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:04.864 12:21:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:04.864 12:21:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.864 12:21:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.864 12:21:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:04.864 12:21:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.864 12:21:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:04.864 12:21:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:04.864 12:21:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:04.864 12:21:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:04.864 12:21:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.864 12:21:58 -- common/autotest_common.sh@10 -- # set +x 00:25:04.864 nvme0n1 00:25:04.864 12:21:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.864 12:21:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.864 12:21:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:04.865 12:21:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.865 12:21:58 -- common/autotest_common.sh@10 -- # set +x 00:25:05.124 12:21:58 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.124 12:21:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.124 12:21:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.124 12:21:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.124 12:21:58 -- common/autotest_common.sh@10 -- # set +x 00:25:05.124 12:21:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.124 12:21:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:05.124 12:21:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:05.124 12:21:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:05.124 12:21:58 -- host/auth.sh@44 -- # digest=sha256 00:25:05.124 12:21:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:05.124 12:21:58 -- host/auth.sh@44 -- # keyid=4 00:25:05.124 12:21:58 -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:05.124 12:21:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:05.124 12:21:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:05.124 12:21:58 -- host/auth.sh@49 -- # echo DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:05.124 12:21:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:25:05.124 12:21:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:05.124 12:21:58 -- host/auth.sh@68 -- # digest=sha256 00:25:05.124 12:21:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:05.124 12:21:58 -- host/auth.sh@68 -- # keyid=4 00:25:05.124 12:21:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:05.124 12:21:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.124 12:21:58 -- common/autotest_common.sh@10 -- # set +x 00:25:05.124 12:21:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.124 12:21:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:05.124 12:21:58 -- nvmf/common.sh@717 -- # local ip 00:25:05.124 12:21:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:05.124 12:21:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:05.124 12:21:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.124 12:21:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.124 12:21:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:05.124 12:21:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.124 12:21:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:05.124 12:21:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:05.124 12:21:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:05.124 12:21:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:05.124 12:21:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.124 12:21:58 -- common/autotest_common.sh@10 -- # set +x 00:25:05.124 nvme0n1 00:25:05.124 12:21:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.124 12:21:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.124 12:21:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:05.124 12:21:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.124 12:21:58 -- common/autotest_common.sh@10 -- # set +x 00:25:05.124 12:21:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.124 12:21:58 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.124 12:21:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.124 12:21:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.124 12:21:58 -- common/autotest_common.sh@10 -- # set +x 00:25:05.124 12:21:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.124 12:21:58 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:05.124 12:21:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:05.124 12:21:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:05.124 12:21:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:05.124 12:21:58 -- host/auth.sh@44 -- # digest=sha256 00:25:05.124 12:21:58 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.124 12:21:58 -- host/auth.sh@44 -- # keyid=0 00:25:05.124 12:21:58 -- host/auth.sh@45 -- # key=DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:05.124 12:21:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:05.124 12:21:58 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:05.383 12:21:58 -- host/auth.sh@49 -- # echo DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:05.383 12:21:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:25:05.383 12:21:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:05.383 12:21:58 -- host/auth.sh@68 -- # digest=sha256 00:25:05.383 12:21:58 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:05.383 12:21:58 -- host/auth.sh@68 -- # keyid=0 00:25:05.383 12:21:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:05.383 12:21:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.383 12:21:58 -- common/autotest_common.sh@10 -- # set +x 00:25:05.641 12:21:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.641 12:21:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:05.641 12:21:58 -- nvmf/common.sh@717 -- # local ip 00:25:05.641 12:21:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:05.641 12:21:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:05.641 12:21:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.641 12:21:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.641 12:21:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:05.641 12:21:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.641 12:21:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:05.641 12:21:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:05.641 12:21:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:05.641 12:21:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:05.641 12:21:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.641 12:21:58 -- common/autotest_common.sh@10 -- # set +x 00:25:05.641 nvme0n1 00:25:05.641 12:21:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.641 12:21:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.641 12:21:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:05.641 12:21:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.641 12:21:58 -- common/autotest_common.sh@10 -- # set +x 00:25:05.641 12:21:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.641 12:21:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.641 12:21:59 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.641 12:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.641 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:05.641 12:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.641 12:21:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:05.642 12:21:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:05.642 12:21:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:05.642 12:21:59 -- host/auth.sh@44 -- # digest=sha256 00:25:05.642 12:21:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.642 12:21:59 -- host/auth.sh@44 -- # keyid=1 00:25:05.642 12:21:59 -- host/auth.sh@45 -- # key=DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:05.642 12:21:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:05.642 12:21:59 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:05.642 12:21:59 -- host/auth.sh@49 -- # echo DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:05.642 12:21:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:25:05.642 12:21:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:05.642 12:21:59 -- host/auth.sh@68 -- # digest=sha256 00:25:05.642 12:21:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:05.642 12:21:59 -- host/auth.sh@68 -- # keyid=1 00:25:05.642 12:21:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:05.642 12:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.642 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:05.642 12:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.642 12:21:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:05.642 12:21:59 -- nvmf/common.sh@717 -- # local ip 00:25:05.642 12:21:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:05.642 12:21:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:05.642 12:21:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.642 12:21:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.642 12:21:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:05.642 12:21:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.642 12:21:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:05.642 12:21:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:05.642 12:21:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:05.642 12:21:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:05.642 12:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.642 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:05.901 nvme0n1 00:25:05.901 12:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.901 12:21:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.901 12:21:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:05.901 12:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.901 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:05.901 12:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.901 12:21:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.901 12:21:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.901 12:21:59 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:25:05.901 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:05.901 12:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.901 12:21:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:05.901 12:21:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:05.901 12:21:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:05.901 12:21:59 -- host/auth.sh@44 -- # digest=sha256 00:25:05.901 12:21:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.901 12:21:59 -- host/auth.sh@44 -- # keyid=2 00:25:05.901 12:21:59 -- host/auth.sh@45 -- # key=DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:05.901 12:21:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:05.901 12:21:59 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:05.901 12:21:59 -- host/auth.sh@49 -- # echo DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:05.901 12:21:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:25:05.901 12:21:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:05.901 12:21:59 -- host/auth.sh@68 -- # digest=sha256 00:25:05.901 12:21:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:05.901 12:21:59 -- host/auth.sh@68 -- # keyid=2 00:25:05.901 12:21:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:05.901 12:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.901 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:05.901 12:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.901 12:21:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:05.901 12:21:59 -- nvmf/common.sh@717 -- # local ip 00:25:05.901 12:21:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:05.901 12:21:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:05.901 12:21:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.901 12:21:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.901 12:21:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:05.901 12:21:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.901 12:21:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:05.901 12:21:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:05.901 12:21:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:05.901 12:21:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:05.901 12:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.901 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.161 nvme0n1 00:25:06.161 12:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.161 12:21:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.161 12:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.161 12:21:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:06.161 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.161 12:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.161 12:21:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.161 12:21:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.161 12:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.161 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.161 12:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.161 
12:21:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:06.161 12:21:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:06.161 12:21:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:06.161 12:21:59 -- host/auth.sh@44 -- # digest=sha256 00:25:06.161 12:21:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.161 12:21:59 -- host/auth.sh@44 -- # keyid=3 00:25:06.161 12:21:59 -- host/auth.sh@45 -- # key=DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:06.161 12:21:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:06.161 12:21:59 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:06.161 12:21:59 -- host/auth.sh@49 -- # echo DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:06.161 12:21:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:25:06.161 12:21:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:06.161 12:21:59 -- host/auth.sh@68 -- # digest=sha256 00:25:06.161 12:21:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:06.161 12:21:59 -- host/auth.sh@68 -- # keyid=3 00:25:06.161 12:21:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:06.161 12:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.161 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.161 12:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.161 12:21:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:06.161 12:21:59 -- nvmf/common.sh@717 -- # local ip 00:25:06.161 12:21:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:06.161 12:21:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:06.161 12:21:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.161 12:21:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.161 12:21:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:06.161 12:21:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.161 12:21:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:06.161 12:21:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:06.161 12:21:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:06.161 12:21:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:06.161 12:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.161 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.161 nvme0n1 00:25:06.161 12:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.161 12:21:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.161 12:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.161 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.161 12:21:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:06.161 12:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.161 12:21:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.161 12:21:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.161 12:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.161 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.420 12:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.420 12:21:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:06.420 12:21:59 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe3072 4 00:25:06.420 12:21:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:06.420 12:21:59 -- host/auth.sh@44 -- # digest=sha256 00:25:06.420 12:21:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.420 12:21:59 -- host/auth.sh@44 -- # keyid=4 00:25:06.420 12:21:59 -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:06.420 12:21:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:06.420 12:21:59 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:06.420 12:21:59 -- host/auth.sh@49 -- # echo DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:06.420 12:21:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:25:06.420 12:21:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:06.420 12:21:59 -- host/auth.sh@68 -- # digest=sha256 00:25:06.420 12:21:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:06.420 12:21:59 -- host/auth.sh@68 -- # keyid=4 00:25:06.420 12:21:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:06.420 12:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.420 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.420 12:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.420 12:21:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:06.420 12:21:59 -- nvmf/common.sh@717 -- # local ip 00:25:06.420 12:21:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:06.420 12:21:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:06.420 12:21:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.420 12:21:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.420 12:21:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:06.420 12:21:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.420 12:21:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:06.420 12:21:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:06.420 12:21:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:06.420 12:21:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:06.420 12:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.420 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.420 nvme0n1 00:25:06.420 12:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.420 12:21:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.420 12:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.420 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.420 12:21:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:06.420 12:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.420 12:21:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.420 12:21:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.420 12:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.420 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.420 12:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.420 12:21:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:06.420 12:21:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:06.420 12:21:59 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe4096 0 00:25:06.420 12:21:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:06.420 12:21:59 -- host/auth.sh@44 -- # digest=sha256 00:25:06.420 12:21:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:06.420 12:21:59 -- host/auth.sh@44 -- # keyid=0 00:25:06.420 12:21:59 -- host/auth.sh@45 -- # key=DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:06.420 12:21:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:06.420 12:21:59 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:06.990 12:22:00 -- host/auth.sh@49 -- # echo DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:06.990 12:22:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:25:06.990 12:22:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:06.990 12:22:00 -- host/auth.sh@68 -- # digest=sha256 00:25:06.990 12:22:00 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:06.990 12:22:00 -- host/auth.sh@68 -- # keyid=0 00:25:06.990 12:22:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:06.990 12:22:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.990 12:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:06.990 12:22:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.990 12:22:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:06.990 12:22:00 -- nvmf/common.sh@717 -- # local ip 00:25:06.990 12:22:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:06.990 12:22:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:06.990 12:22:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.990 12:22:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.990 12:22:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:06.990 12:22:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.990 12:22:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:06.990 12:22:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:06.990 12:22:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:06.990 12:22:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:06.990 12:22:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.990 12:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:07.248 nvme0n1 00:25:07.248 12:22:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.248 12:22:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.248 12:22:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.248 12:22:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:07.248 12:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:07.248 12:22:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.248 12:22:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.248 12:22:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.248 12:22:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.248 12:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:07.248 12:22:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.248 12:22:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:07.248 12:22:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:07.248 12:22:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:07.248 12:22:00 -- host/auth.sh@44 -- # 
digest=sha256 00:25:07.248 12:22:00 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.248 12:22:00 -- host/auth.sh@44 -- # keyid=1 00:25:07.248 12:22:00 -- host/auth.sh@45 -- # key=DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:07.248 12:22:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:07.248 12:22:00 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:07.248 12:22:00 -- host/auth.sh@49 -- # echo DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:07.248 12:22:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:25:07.248 12:22:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:07.248 12:22:00 -- host/auth.sh@68 -- # digest=sha256 00:25:07.248 12:22:00 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:07.248 12:22:00 -- host/auth.sh@68 -- # keyid=1 00:25:07.248 12:22:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:07.248 12:22:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.248 12:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:07.248 12:22:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.248 12:22:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:07.248 12:22:00 -- nvmf/common.sh@717 -- # local ip 00:25:07.249 12:22:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:07.249 12:22:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:07.249 12:22:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.249 12:22:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.249 12:22:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:07.249 12:22:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.249 12:22:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:07.249 12:22:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:07.249 12:22:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:07.249 12:22:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:07.249 12:22:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.249 12:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:07.507 nvme0n1 00:25:07.507 12:22:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.507 12:22:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.507 12:22:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.507 12:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:07.507 12:22:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:07.507 12:22:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.507 12:22:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.508 12:22:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.508 12:22:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.508 12:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:07.508 12:22:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.508 12:22:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:07.508 12:22:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:07.508 12:22:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:07.508 12:22:00 -- host/auth.sh@44 -- # digest=sha256 00:25:07.508 12:22:00 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.508 12:22:00 -- host/auth.sh@44 
-- # keyid=2 00:25:07.508 12:22:00 -- host/auth.sh@45 -- # key=DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:07.508 12:22:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:07.508 12:22:00 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:07.508 12:22:00 -- host/auth.sh@49 -- # echo DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:07.508 12:22:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:25:07.508 12:22:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:07.508 12:22:00 -- host/auth.sh@68 -- # digest=sha256 00:25:07.508 12:22:00 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:07.508 12:22:00 -- host/auth.sh@68 -- # keyid=2 00:25:07.508 12:22:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:07.508 12:22:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.508 12:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:07.508 12:22:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.508 12:22:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:07.508 12:22:00 -- nvmf/common.sh@717 -- # local ip 00:25:07.508 12:22:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:07.508 12:22:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:07.508 12:22:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.508 12:22:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.508 12:22:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:07.508 12:22:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.508 12:22:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:07.508 12:22:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:07.508 12:22:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:07.508 12:22:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:07.508 12:22:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.508 12:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:07.766 nvme0n1 00:25:07.766 12:22:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.766 12:22:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.766 12:22:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.766 12:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:07.766 12:22:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:07.766 12:22:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.766 12:22:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.766 12:22:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.766 12:22:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.766 12:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:07.766 12:22:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.766 12:22:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:07.767 12:22:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:07.767 12:22:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:07.767 12:22:01 -- host/auth.sh@44 -- # digest=sha256 00:25:07.767 12:22:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.767 12:22:01 -- host/auth.sh@44 -- # keyid=3 00:25:07.767 12:22:01 -- host/auth.sh@45 -- # key=DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:07.767 12:22:01 
-- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:07.767 12:22:01 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:07.767 12:22:01 -- host/auth.sh@49 -- # echo DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:07.767 12:22:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:25:07.767 12:22:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:07.767 12:22:01 -- host/auth.sh@68 -- # digest=sha256 00:25:07.767 12:22:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:07.767 12:22:01 -- host/auth.sh@68 -- # keyid=3 00:25:07.767 12:22:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:07.767 12:22:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.767 12:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:07.767 12:22:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.767 12:22:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:07.767 12:22:01 -- nvmf/common.sh@717 -- # local ip 00:25:07.767 12:22:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:07.767 12:22:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:07.767 12:22:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.767 12:22:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.767 12:22:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:07.767 12:22:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.767 12:22:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:07.767 12:22:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:07.767 12:22:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:07.767 12:22:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:07.767 12:22:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.767 12:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:08.025 nvme0n1 00:25:08.025 12:22:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.025 12:22:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.025 12:22:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:08.025 12:22:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.025 12:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:08.025 12:22:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.025 12:22:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.025 12:22:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.025 12:22:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.025 12:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:08.025 12:22:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.025 12:22:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:08.025 12:22:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:08.025 12:22:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:08.025 12:22:01 -- host/auth.sh@44 -- # digest=sha256 00:25:08.025 12:22:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:08.025 12:22:01 -- host/auth.sh@44 -- # keyid=4 00:25:08.026 12:22:01 -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:08.026 12:22:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:08.026 12:22:01 -- host/auth.sh@48 -- # echo 
ffdhe4096 00:25:08.026 12:22:01 -- host/auth.sh@49 -- # echo DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:08.026 12:22:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:25:08.026 12:22:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:08.026 12:22:01 -- host/auth.sh@68 -- # digest=sha256 00:25:08.026 12:22:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:08.026 12:22:01 -- host/auth.sh@68 -- # keyid=4 00:25:08.026 12:22:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:08.026 12:22:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.026 12:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:08.026 12:22:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.026 12:22:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:08.026 12:22:01 -- nvmf/common.sh@717 -- # local ip 00:25:08.026 12:22:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:08.026 12:22:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:08.026 12:22:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.026 12:22:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.026 12:22:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:08.026 12:22:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.026 12:22:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:08.026 12:22:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:08.026 12:22:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:08.026 12:22:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:08.026 12:22:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.026 12:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:08.284 nvme0n1 00:25:08.284 12:22:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.284 12:22:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.284 12:22:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.284 12:22:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:08.284 12:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:08.284 12:22:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.284 12:22:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.284 12:22:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.284 12:22:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.284 12:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:08.284 12:22:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.284 12:22:01 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:08.284 12:22:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:08.284 12:22:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:08.284 12:22:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:08.284 12:22:01 -- host/auth.sh@44 -- # digest=sha256 00:25:08.284 12:22:01 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:08.284 12:22:01 -- host/auth.sh@44 -- # keyid=0 00:25:08.284 12:22:01 -- host/auth.sh@45 -- # key=DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:08.284 12:22:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:08.284 12:22:01 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:10.183 12:22:03 -- 
host/auth.sh@49 -- # echo DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:10.183 12:22:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:25:10.183 12:22:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:10.183 12:22:03 -- host/auth.sh@68 -- # digest=sha256 00:25:10.183 12:22:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:10.183 12:22:03 -- host/auth.sh@68 -- # keyid=0 00:25:10.183 12:22:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:10.183 12:22:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.183 12:22:03 -- common/autotest_common.sh@10 -- # set +x 00:25:10.183 12:22:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.183 12:22:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:10.183 12:22:03 -- nvmf/common.sh@717 -- # local ip 00:25:10.183 12:22:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:10.183 12:22:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:10.183 12:22:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.183 12:22:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.183 12:22:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:10.183 12:22:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.183 12:22:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:10.183 12:22:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:10.183 12:22:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:10.183 12:22:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:10.183 12:22:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.183 12:22:03 -- common/autotest_common.sh@10 -- # set +x 00:25:10.441 nvme0n1 00:25:10.441 12:22:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.441 12:22:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.441 12:22:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.441 12:22:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:10.441 12:22:03 -- common/autotest_common.sh@10 -- # set +x 00:25:10.441 12:22:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.441 12:22:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.441 12:22:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.441 12:22:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.441 12:22:03 -- common/autotest_common.sh@10 -- # set +x 00:25:10.441 12:22:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.441 12:22:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:10.441 12:22:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:10.441 12:22:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:10.441 12:22:03 -- host/auth.sh@44 -- # digest=sha256 00:25:10.441 12:22:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:10.441 12:22:03 -- host/auth.sh@44 -- # keyid=1 00:25:10.441 12:22:03 -- host/auth.sh@45 -- # key=DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:10.441 12:22:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:10.441 12:22:03 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:10.441 12:22:03 -- host/auth.sh@49 -- # echo DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:10.441 12:22:03 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:25:10.441 12:22:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:10.441 12:22:03 -- host/auth.sh@68 -- # digest=sha256 00:25:10.441 12:22:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:10.441 12:22:03 -- host/auth.sh@68 -- # keyid=1 00:25:10.441 12:22:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:10.441 12:22:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.441 12:22:03 -- common/autotest_common.sh@10 -- # set +x 00:25:10.441 12:22:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.441 12:22:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:10.441 12:22:03 -- nvmf/common.sh@717 -- # local ip 00:25:10.441 12:22:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:10.441 12:22:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:10.441 12:22:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.441 12:22:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.441 12:22:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:10.441 12:22:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.441 12:22:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:10.441 12:22:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:10.441 12:22:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:10.441 12:22:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:10.441 12:22:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.441 12:22:03 -- common/autotest_common.sh@10 -- # set +x 00:25:11.007 nvme0n1 00:25:11.007 12:22:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.007 12:22:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.007 12:22:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:11.007 12:22:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.007 12:22:04 -- common/autotest_common.sh@10 -- # set +x 00:25:11.007 12:22:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.007 12:22:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.007 12:22:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.007 12:22:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.007 12:22:04 -- common/autotest_common.sh@10 -- # set +x 00:25:11.007 12:22:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.007 12:22:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:11.007 12:22:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:11.007 12:22:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:11.007 12:22:04 -- host/auth.sh@44 -- # digest=sha256 00:25:11.007 12:22:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:11.007 12:22:04 -- host/auth.sh@44 -- # keyid=2 00:25:11.007 12:22:04 -- host/auth.sh@45 -- # key=DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:11.007 12:22:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:11.007 12:22:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:11.007 12:22:04 -- host/auth.sh@49 -- # echo DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:11.007 12:22:04 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:25:11.007 12:22:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:11.007 12:22:04 -- 
host/auth.sh@68 -- # digest=sha256 00:25:11.007 12:22:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:11.007 12:22:04 -- host/auth.sh@68 -- # keyid=2 00:25:11.007 12:22:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:11.007 12:22:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.007 12:22:04 -- common/autotest_common.sh@10 -- # set +x 00:25:11.007 12:22:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.007 12:22:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:11.007 12:22:04 -- nvmf/common.sh@717 -- # local ip 00:25:11.007 12:22:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:11.007 12:22:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:11.007 12:22:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.007 12:22:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.007 12:22:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:11.007 12:22:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.007 12:22:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:11.007 12:22:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:11.007 12:22:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:11.007 12:22:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:11.007 12:22:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.007 12:22:04 -- common/autotest_common.sh@10 -- # set +x 00:25:11.265 nvme0n1 00:25:11.265 12:22:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.265 12:22:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.265 12:22:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.265 12:22:04 -- common/autotest_common.sh@10 -- # set +x 00:25:11.265 12:22:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:11.265 12:22:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.265 12:22:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.265 12:22:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.265 12:22:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.265 12:22:04 -- common/autotest_common.sh@10 -- # set +x 00:25:11.523 12:22:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.523 12:22:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:11.523 12:22:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:11.523 12:22:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:11.523 12:22:04 -- host/auth.sh@44 -- # digest=sha256 00:25:11.523 12:22:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:11.523 12:22:04 -- host/auth.sh@44 -- # keyid=3 00:25:11.523 12:22:04 -- host/auth.sh@45 -- # key=DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:11.523 12:22:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:11.523 12:22:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:11.523 12:22:04 -- host/auth.sh@49 -- # echo DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:11.523 12:22:04 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:25:11.523 12:22:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:11.523 12:22:04 -- host/auth.sh@68 -- # digest=sha256 00:25:11.523 12:22:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:11.523 12:22:04 
-- host/auth.sh@68 -- # keyid=3 00:25:11.523 12:22:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:11.523 12:22:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.523 12:22:04 -- common/autotest_common.sh@10 -- # set +x 00:25:11.523 12:22:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.523 12:22:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:11.523 12:22:04 -- nvmf/common.sh@717 -- # local ip 00:25:11.523 12:22:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:11.523 12:22:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:11.523 12:22:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.523 12:22:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.523 12:22:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:11.523 12:22:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.523 12:22:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:11.523 12:22:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:11.523 12:22:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:11.523 12:22:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:11.523 12:22:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.523 12:22:04 -- common/autotest_common.sh@10 -- # set +x 00:25:11.782 nvme0n1 00:25:11.782 12:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.782 12:22:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.782 12:22:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:11.782 12:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.782 12:22:05 -- common/autotest_common.sh@10 -- # set +x 00:25:11.782 12:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.782 12:22:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.782 12:22:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.782 12:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.782 12:22:05 -- common/autotest_common.sh@10 -- # set +x 00:25:11.782 12:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.782 12:22:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:11.782 12:22:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:11.782 12:22:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:11.782 12:22:05 -- host/auth.sh@44 -- # digest=sha256 00:25:11.782 12:22:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:11.782 12:22:05 -- host/auth.sh@44 -- # keyid=4 00:25:11.782 12:22:05 -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:11.782 12:22:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:11.782 12:22:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:11.782 12:22:05 -- host/auth.sh@49 -- # echo DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:11.782 12:22:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:25:11.782 12:22:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:11.782 12:22:05 -- host/auth.sh@68 -- # digest=sha256 00:25:11.782 12:22:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:11.782 12:22:05 -- host/auth.sh@68 -- # keyid=4 00:25:11.782 12:22:05 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:11.782 12:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.782 12:22:05 -- common/autotest_common.sh@10 -- # set +x 00:25:11.782 12:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.782 12:22:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:11.782 12:22:05 -- nvmf/common.sh@717 -- # local ip 00:25:11.782 12:22:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:11.782 12:22:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:11.782 12:22:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.782 12:22:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.782 12:22:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:11.782 12:22:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.782 12:22:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:11.782 12:22:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:11.782 12:22:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:11.782 12:22:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:11.782 12:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.782 12:22:05 -- common/autotest_common.sh@10 -- # set +x 00:25:12.353 nvme0n1 00:25:12.353 12:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.353 12:22:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.353 12:22:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:12.353 12:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.353 12:22:05 -- common/autotest_common.sh@10 -- # set +x 00:25:12.353 12:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.353 12:22:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.353 12:22:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.353 12:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.353 12:22:05 -- common/autotest_common.sh@10 -- # set +x 00:25:12.353 12:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.353 12:22:05 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:12.353 12:22:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:12.353 12:22:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:12.353 12:22:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:12.353 12:22:05 -- host/auth.sh@44 -- # digest=sha256 00:25:12.353 12:22:05 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:12.353 12:22:05 -- host/auth.sh@44 -- # keyid=0 00:25:12.353 12:22:05 -- host/auth.sh@45 -- # key=DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:12.353 12:22:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:12.353 12:22:05 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:16.536 12:22:09 -- host/auth.sh@49 -- # echo DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:16.536 12:22:09 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:25:16.536 12:22:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:16.536 12:22:09 -- host/auth.sh@68 -- # digest=sha256 00:25:16.536 12:22:09 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:16.536 12:22:09 -- host/auth.sh@68 -- # keyid=0 00:25:16.536 12:22:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
00:25:16.536 12:22:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.536 12:22:09 -- common/autotest_common.sh@10 -- # set +x 00:25:16.536 12:22:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.536 12:22:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:16.536 12:22:09 -- nvmf/common.sh@717 -- # local ip 00:25:16.536 12:22:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:16.536 12:22:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:16.536 12:22:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.536 12:22:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.536 12:22:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:16.536 12:22:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.536 12:22:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:16.536 12:22:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:16.536 12:22:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:16.536 12:22:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:16.536 12:22:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.536 12:22:09 -- common/autotest_common.sh@10 -- # set +x 00:25:16.536 nvme0n1 00:25:16.536 12:22:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.536 12:22:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.536 12:22:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.536 12:22:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:16.536 12:22:09 -- common/autotest_common.sh@10 -- # set +x 00:25:16.536 12:22:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.536 12:22:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.536 12:22:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.536 12:22:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.536 12:22:09 -- common/autotest_common.sh@10 -- # set +x 00:25:16.536 12:22:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.536 12:22:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:16.536 12:22:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:16.536 12:22:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:16.536 12:22:09 -- host/auth.sh@44 -- # digest=sha256 00:25:16.536 12:22:09 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:16.536 12:22:09 -- host/auth.sh@44 -- # keyid=1 00:25:16.536 12:22:09 -- host/auth.sh@45 -- # key=DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:16.536 12:22:09 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:16.536 12:22:09 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:16.536 12:22:09 -- host/auth.sh@49 -- # echo DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:16.536 12:22:09 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:25:16.536 12:22:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:16.536 12:22:09 -- host/auth.sh@68 -- # digest=sha256 00:25:16.536 12:22:09 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:16.536 12:22:09 -- host/auth.sh@68 -- # keyid=1 00:25:16.536 12:22:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:16.536 12:22:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.536 12:22:09 -- 
common/autotest_common.sh@10 -- # set +x 00:25:16.536 12:22:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.536 12:22:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:16.536 12:22:09 -- nvmf/common.sh@717 -- # local ip 00:25:16.536 12:22:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:16.536 12:22:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:16.536 12:22:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.536 12:22:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.536 12:22:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:16.536 12:22:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.536 12:22:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:16.536 12:22:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:16.536 12:22:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:16.536 12:22:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:16.536 12:22:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.536 12:22:09 -- common/autotest_common.sh@10 -- # set +x 00:25:17.470 nvme0n1 00:25:17.470 12:22:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.470 12:22:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.470 12:22:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:17.470 12:22:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.470 12:22:10 -- common/autotest_common.sh@10 -- # set +x 00:25:17.470 12:22:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.470 12:22:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.470 12:22:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.470 12:22:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.470 12:22:10 -- common/autotest_common.sh@10 -- # set +x 00:25:17.470 12:22:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.470 12:22:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:17.470 12:22:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:17.470 12:22:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:17.470 12:22:10 -- host/auth.sh@44 -- # digest=sha256 00:25:17.470 12:22:10 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:17.470 12:22:10 -- host/auth.sh@44 -- # keyid=2 00:25:17.470 12:22:10 -- host/auth.sh@45 -- # key=DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:17.470 12:22:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:17.470 12:22:10 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:17.470 12:22:10 -- host/auth.sh@49 -- # echo DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:17.470 12:22:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:25:17.470 12:22:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:17.470 12:22:10 -- host/auth.sh@68 -- # digest=sha256 00:25:17.470 12:22:10 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:17.470 12:22:10 -- host/auth.sh@68 -- # keyid=2 00:25:17.470 12:22:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:17.470 12:22:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.470 12:22:10 -- common/autotest_common.sh@10 -- # set +x 00:25:17.470 12:22:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.470 12:22:10 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:25:17.470 12:22:10 -- nvmf/common.sh@717 -- # local ip 00:25:17.470 12:22:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:17.470 12:22:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:17.470 12:22:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.470 12:22:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.470 12:22:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:17.470 12:22:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.470 12:22:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:17.470 12:22:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:17.470 12:22:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:17.470 12:22:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:17.470 12:22:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.470 12:22:10 -- common/autotest_common.sh@10 -- # set +x 00:25:18.046 nvme0n1 00:25:18.046 12:22:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.046 12:22:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.046 12:22:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:18.046 12:22:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.046 12:22:11 -- common/autotest_common.sh@10 -- # set +x 00:25:18.046 12:22:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.046 12:22:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.046 12:22:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.046 12:22:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.046 12:22:11 -- common/autotest_common.sh@10 -- # set +x 00:25:18.046 12:22:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.046 12:22:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:18.046 12:22:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:18.046 12:22:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:18.046 12:22:11 -- host/auth.sh@44 -- # digest=sha256 00:25:18.046 12:22:11 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:18.046 12:22:11 -- host/auth.sh@44 -- # keyid=3 00:25:18.046 12:22:11 -- host/auth.sh@45 -- # key=DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:18.046 12:22:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:18.046 12:22:11 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:18.046 12:22:11 -- host/auth.sh@49 -- # echo DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:18.046 12:22:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:25:18.046 12:22:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:18.046 12:22:11 -- host/auth.sh@68 -- # digest=sha256 00:25:18.046 12:22:11 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:18.046 12:22:11 -- host/auth.sh@68 -- # keyid=3 00:25:18.046 12:22:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:18.046 12:22:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.046 12:22:11 -- common/autotest_common.sh@10 -- # set +x 00:25:18.046 12:22:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.046 12:22:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:18.046 12:22:11 -- nvmf/common.sh@717 -- # local ip 00:25:18.046 12:22:11 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:25:18.046 12:22:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:18.046 12:22:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.046 12:22:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.046 12:22:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:18.046 12:22:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.046 12:22:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:18.046 12:22:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:18.046 12:22:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:18.046 12:22:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:18.046 12:22:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.046 12:22:11 -- common/autotest_common.sh@10 -- # set +x 00:25:18.612 nvme0n1 00:25:18.612 12:22:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.612 12:22:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.612 12:22:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:18.612 12:22:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.612 12:22:11 -- common/autotest_common.sh@10 -- # set +x 00:25:18.612 12:22:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.612 12:22:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.612 12:22:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.612 12:22:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.612 12:22:11 -- common/autotest_common.sh@10 -- # set +x 00:25:18.612 12:22:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.612 12:22:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:18.612 12:22:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:18.612 12:22:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:18.612 12:22:11 -- host/auth.sh@44 -- # digest=sha256 00:25:18.612 12:22:11 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:18.612 12:22:11 -- host/auth.sh@44 -- # keyid=4 00:25:18.612 12:22:11 -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:18.612 12:22:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:18.612 12:22:11 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:18.612 12:22:11 -- host/auth.sh@49 -- # echo DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:18.612 12:22:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:25:18.612 12:22:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:18.612 12:22:11 -- host/auth.sh@68 -- # digest=sha256 00:25:18.612 12:22:11 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:18.612 12:22:11 -- host/auth.sh@68 -- # keyid=4 00:25:18.612 12:22:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:18.612 12:22:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.612 12:22:11 -- common/autotest_common.sh@10 -- # set +x 00:25:18.612 12:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.612 12:22:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:18.612 12:22:12 -- nvmf/common.sh@717 -- # local ip 00:25:18.612 12:22:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:18.612 12:22:12 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:25:18.612 12:22:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.612 12:22:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.612 12:22:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:18.612 12:22:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.612 12:22:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:18.612 12:22:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:18.612 12:22:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:18.612 12:22:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:18.612 12:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.612 12:22:12 -- common/autotest_common.sh@10 -- # set +x 00:25:19.546 nvme0n1 00:25:19.546 12:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.546 12:22:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.546 12:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.546 12:22:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:19.546 12:22:12 -- common/autotest_common.sh@10 -- # set +x 00:25:19.546 12:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.546 12:22:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.546 12:22:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.546 12:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.546 12:22:12 -- common/autotest_common.sh@10 -- # set +x 00:25:19.546 12:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.546 12:22:12 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:25:19.546 12:22:12 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:19.546 12:22:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:19.546 12:22:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:19.546 12:22:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:19.546 12:22:12 -- host/auth.sh@44 -- # digest=sha384 00:25:19.546 12:22:12 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.546 12:22:12 -- host/auth.sh@44 -- # keyid=0 00:25:19.546 12:22:12 -- host/auth.sh@45 -- # key=DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:19.546 12:22:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:19.546 12:22:12 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:19.546 12:22:12 -- host/auth.sh@49 -- # echo DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:19.546 12:22:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:25:19.546 12:22:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:19.546 12:22:12 -- host/auth.sh@68 -- # digest=sha384 00:25:19.546 12:22:12 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:19.546 12:22:12 -- host/auth.sh@68 -- # keyid=0 00:25:19.546 12:22:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:19.546 12:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.546 12:22:12 -- common/autotest_common.sh@10 -- # set +x 00:25:19.546 12:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.546 12:22:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:19.546 12:22:12 -- nvmf/common.sh@717 -- # local ip 00:25:19.546 12:22:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:19.546 12:22:12 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:25:19.546 12:22:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.546 12:22:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.546 12:22:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:19.546 12:22:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.546 12:22:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:19.546 12:22:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:19.546 12:22:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:19.546 12:22:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:19.546 12:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.546 12:22:12 -- common/autotest_common.sh@10 -- # set +x 00:25:19.546 nvme0n1 00:25:19.546 12:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.546 12:22:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.546 12:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.546 12:22:12 -- common/autotest_common.sh@10 -- # set +x 00:25:19.546 12:22:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:19.546 12:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.546 12:22:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.546 12:22:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.546 12:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.546 12:22:12 -- common/autotest_common.sh@10 -- # set +x 00:25:19.546 12:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.546 12:22:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:19.546 12:22:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:19.546 12:22:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:19.546 12:22:12 -- host/auth.sh@44 -- # digest=sha384 00:25:19.546 12:22:12 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.546 12:22:12 -- host/auth.sh@44 -- # keyid=1 00:25:19.546 12:22:12 -- host/auth.sh@45 -- # key=DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:19.546 12:22:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:19.546 12:22:12 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:19.546 12:22:12 -- host/auth.sh@49 -- # echo DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:19.546 12:22:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:25:19.546 12:22:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:19.546 12:22:12 -- host/auth.sh@68 -- # digest=sha384 00:25:19.546 12:22:12 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:19.546 12:22:12 -- host/auth.sh@68 -- # keyid=1 00:25:19.546 12:22:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:19.546 12:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.546 12:22:12 -- common/autotest_common.sh@10 -- # set +x 00:25:19.546 12:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.546 12:22:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:19.546 12:22:12 -- nvmf/common.sh@717 -- # local ip 00:25:19.546 12:22:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:19.546 12:22:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:19.546 12:22:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.546 
12:22:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.546 12:22:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:19.546 12:22:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.546 12:22:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:19.546 12:22:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:19.546 12:22:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:19.546 12:22:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:19.546 12:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.546 12:22:12 -- common/autotest_common.sh@10 -- # set +x 00:25:19.805 nvme0n1 00:25:19.805 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.805 12:22:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.805 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.805 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:19.805 12:22:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:19.805 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.805 12:22:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.805 12:22:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.805 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.805 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:19.805 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.805 12:22:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:19.805 12:22:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:19.805 12:22:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:19.805 12:22:13 -- host/auth.sh@44 -- # digest=sha384 00:25:19.805 12:22:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.805 12:22:13 -- host/auth.sh@44 -- # keyid=2 00:25:19.805 12:22:13 -- host/auth.sh@45 -- # key=DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:19.805 12:22:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:19.805 12:22:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:19.805 12:22:13 -- host/auth.sh@49 -- # echo DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:19.805 12:22:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:25:19.805 12:22:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:19.805 12:22:13 -- host/auth.sh@68 -- # digest=sha384 00:25:19.805 12:22:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:19.805 12:22:13 -- host/auth.sh@68 -- # keyid=2 00:25:19.805 12:22:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:19.805 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.805 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:19.805 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.805 12:22:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:19.805 12:22:13 -- nvmf/common.sh@717 -- # local ip 00:25:19.805 12:22:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:19.805 12:22:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:19.805 12:22:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.805 12:22:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.805 12:22:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:19.805 12:22:13 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.805 12:22:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:19.805 12:22:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:19.805 12:22:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:19.805 12:22:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:19.805 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.805 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:19.805 nvme0n1 00:25:19.805 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.805 12:22:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.805 12:22:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:19.805 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.805 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:19.805 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.805 12:22:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.805 12:22:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.805 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.805 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:19.805 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.805 12:22:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:19.805 12:22:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:19.805 12:22:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:19.805 12:22:13 -- host/auth.sh@44 -- # digest=sha384 00:25:19.805 12:22:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.805 12:22:13 -- host/auth.sh@44 -- # keyid=3 00:25:19.805 12:22:13 -- host/auth.sh@45 -- # key=DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:19.805 12:22:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:19.805 12:22:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:19.805 12:22:13 -- host/auth.sh@49 -- # echo DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:19.805 12:22:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:25:19.805 12:22:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:19.805 12:22:13 -- host/auth.sh@68 -- # digest=sha384 00:25:19.805 12:22:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:19.805 12:22:13 -- host/auth.sh@68 -- # keyid=3 00:25:19.805 12:22:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:19.805 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.805 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:19.805 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.063 12:22:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:20.063 12:22:13 -- nvmf/common.sh@717 -- # local ip 00:25:20.063 12:22:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:20.063 12:22:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:20.063 12:22:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.063 12:22:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.063 12:22:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:20.063 12:22:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.063 12:22:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
00:25:20.063 12:22:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:20.063 12:22:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:20.063 12:22:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:20.063 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.063 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:20.063 nvme0n1 00:25:20.063 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.063 12:22:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.063 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.063 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:20.063 12:22:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:20.063 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.063 12:22:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.063 12:22:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.063 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.063 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:20.063 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.063 12:22:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:20.063 12:22:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:20.063 12:22:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:20.063 12:22:13 -- host/auth.sh@44 -- # digest=sha384 00:25:20.063 12:22:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.063 12:22:13 -- host/auth.sh@44 -- # keyid=4 00:25:20.063 12:22:13 -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:20.063 12:22:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:20.063 12:22:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:20.063 12:22:13 -- host/auth.sh@49 -- # echo DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:20.063 12:22:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:25:20.063 12:22:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:20.063 12:22:13 -- host/auth.sh@68 -- # digest=sha384 00:25:20.063 12:22:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:20.063 12:22:13 -- host/auth.sh@68 -- # keyid=4 00:25:20.063 12:22:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:20.063 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.063 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:20.063 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.063 12:22:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:20.063 12:22:13 -- nvmf/common.sh@717 -- # local ip 00:25:20.063 12:22:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:20.063 12:22:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:20.063 12:22:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.063 12:22:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.063 12:22:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:20.063 12:22:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.063 12:22:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:20.063 12:22:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:20.063 
12:22:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:20.063 12:22:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:20.063 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.063 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:20.322 nvme0n1 00:25:20.322 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.322 12:22:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.322 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.322 12:22:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:20.322 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:20.322 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.322 12:22:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.322 12:22:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.322 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.322 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:20.322 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.322 12:22:13 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:20.322 12:22:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:20.322 12:22:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:20.322 12:22:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:20.322 12:22:13 -- host/auth.sh@44 -- # digest=sha384 00:25:20.322 12:22:13 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.322 12:22:13 -- host/auth.sh@44 -- # keyid=0 00:25:20.322 12:22:13 -- host/auth.sh@45 -- # key=DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:20.322 12:22:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:20.322 12:22:13 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:20.322 12:22:13 -- host/auth.sh@49 -- # echo DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:20.322 12:22:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:25:20.322 12:22:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:20.322 12:22:13 -- host/auth.sh@68 -- # digest=sha384 00:25:20.322 12:22:13 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:20.322 12:22:13 -- host/auth.sh@68 -- # keyid=0 00:25:20.322 12:22:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:20.322 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.322 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:20.322 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.322 12:22:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:20.322 12:22:13 -- nvmf/common.sh@717 -- # local ip 00:25:20.322 12:22:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:20.322 12:22:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:20.322 12:22:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.322 12:22:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.322 12:22:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:20.322 12:22:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.322 12:22:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:20.322 12:22:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:20.322 12:22:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:20.322 12:22:13 -- 
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:20.322 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.322 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:20.322 nvme0n1 00:25:20.322 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.322 12:22:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.322 12:22:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:20.322 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.322 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:20.322 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.582 12:22:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.582 12:22:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.582 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.582 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:20.582 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.582 12:22:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:20.582 12:22:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:20.582 12:22:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:20.582 12:22:13 -- host/auth.sh@44 -- # digest=sha384 00:25:20.582 12:22:13 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.582 12:22:13 -- host/auth.sh@44 -- # keyid=1 00:25:20.582 12:22:13 -- host/auth.sh@45 -- # key=DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:20.582 12:22:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:20.582 12:22:13 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:20.582 12:22:13 -- host/auth.sh@49 -- # echo DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:20.582 12:22:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:25:20.582 12:22:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:20.582 12:22:13 -- host/auth.sh@68 -- # digest=sha384 00:25:20.582 12:22:13 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:20.582 12:22:13 -- host/auth.sh@68 -- # keyid=1 00:25:20.582 12:22:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:20.582 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.582 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:20.582 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.582 12:22:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:20.582 12:22:13 -- nvmf/common.sh@717 -- # local ip 00:25:20.582 12:22:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:20.582 12:22:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:20.582 12:22:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.582 12:22:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.582 12:22:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:20.582 12:22:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.582 12:22:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:20.582 12:22:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:20.582 12:22:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:20.582 12:22:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:20.582 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.582 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:20.582 nvme0n1 00:25:20.582 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.582 12:22:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.582 12:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.582 12:22:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:20.582 12:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:20.582 12:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.582 12:22:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.582 12:22:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.582 12:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.582 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:20.582 12:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.582 12:22:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:20.582 12:22:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:20.582 12:22:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:20.582 12:22:14 -- host/auth.sh@44 -- # digest=sha384 00:25:20.582 12:22:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.582 12:22:14 -- host/auth.sh@44 -- # keyid=2 00:25:20.582 12:22:14 -- host/auth.sh@45 -- # key=DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:20.582 12:22:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:20.582 12:22:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:20.582 12:22:14 -- host/auth.sh@49 -- # echo DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:20.582 12:22:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:25:20.582 12:22:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:20.582 12:22:14 -- host/auth.sh@68 -- # digest=sha384 00:25:20.582 12:22:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:20.582 12:22:14 -- host/auth.sh@68 -- # keyid=2 00:25:20.582 12:22:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:20.582 12:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.582 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:20.582 12:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.582 12:22:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:20.582 12:22:14 -- nvmf/common.sh@717 -- # local ip 00:25:20.582 12:22:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:20.582 12:22:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:20.582 12:22:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.582 12:22:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.582 12:22:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:20.582 12:22:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.582 12:22:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:20.582 12:22:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:20.582 12:22:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:20.582 12:22:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:20.582 12:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.582 
12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:20.841 nvme0n1 00:25:20.841 12:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.841 12:22:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.841 12:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.841 12:22:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:20.841 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:20.841 12:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.841 12:22:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.841 12:22:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.841 12:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.841 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:20.841 12:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.841 12:22:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:20.841 12:22:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:20.841 12:22:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:20.841 12:22:14 -- host/auth.sh@44 -- # digest=sha384 00:25:20.841 12:22:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.841 12:22:14 -- host/auth.sh@44 -- # keyid=3 00:25:20.841 12:22:14 -- host/auth.sh@45 -- # key=DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:20.841 12:22:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:20.841 12:22:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:20.841 12:22:14 -- host/auth.sh@49 -- # echo DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:20.841 12:22:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:25:20.841 12:22:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:20.841 12:22:14 -- host/auth.sh@68 -- # digest=sha384 00:25:20.841 12:22:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:20.841 12:22:14 -- host/auth.sh@68 -- # keyid=3 00:25:20.841 12:22:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:20.841 12:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.841 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:20.841 12:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.841 12:22:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:20.841 12:22:14 -- nvmf/common.sh@717 -- # local ip 00:25:20.841 12:22:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:20.841 12:22:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:20.841 12:22:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.841 12:22:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.841 12:22:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:20.841 12:22:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.841 12:22:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:20.841 12:22:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:20.841 12:22:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:20.841 12:22:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:20.841 12:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.841 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:21.100 nvme0n1 00:25:21.100 12:22:14 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.100 12:22:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.100 12:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.100 12:22:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:21.100 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:21.100 12:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.100 12:22:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.100 12:22:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.100 12:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.100 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:21.100 12:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.100 12:22:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:21.100 12:22:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:21.100 12:22:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:21.100 12:22:14 -- host/auth.sh@44 -- # digest=sha384 00:25:21.100 12:22:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.100 12:22:14 -- host/auth.sh@44 -- # keyid=4 00:25:21.100 12:22:14 -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:21.100 12:22:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:21.100 12:22:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:21.100 12:22:14 -- host/auth.sh@49 -- # echo DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:21.100 12:22:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:25:21.100 12:22:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:21.100 12:22:14 -- host/auth.sh@68 -- # digest=sha384 00:25:21.100 12:22:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:21.100 12:22:14 -- host/auth.sh@68 -- # keyid=4 00:25:21.100 12:22:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:21.100 12:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.100 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:21.100 12:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.100 12:22:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:21.100 12:22:14 -- nvmf/common.sh@717 -- # local ip 00:25:21.100 12:22:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:21.100 12:22:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:21.100 12:22:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.100 12:22:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.100 12:22:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:21.100 12:22:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.100 12:22:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:21.100 12:22:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:21.100 12:22:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:21.100 12:22:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:21.100 12:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.100 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:21.359 nvme0n1 00:25:21.359 12:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.359 12:22:14 -- 
host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.359 12:22:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:21.359 12:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.359 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:21.359 12:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.359 12:22:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.359 12:22:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.359 12:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.359 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:21.359 12:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.359 12:22:14 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.359 12:22:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:21.359 12:22:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:21.359 12:22:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:21.359 12:22:14 -- host/auth.sh@44 -- # digest=sha384 00:25:21.359 12:22:14 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.359 12:22:14 -- host/auth.sh@44 -- # keyid=0 00:25:21.359 12:22:14 -- host/auth.sh@45 -- # key=DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:21.359 12:22:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:21.359 12:22:14 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:21.359 12:22:14 -- host/auth.sh@49 -- # echo DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:21.359 12:22:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:25:21.359 12:22:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:21.360 12:22:14 -- host/auth.sh@68 -- # digest=sha384 00:25:21.360 12:22:14 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:21.360 12:22:14 -- host/auth.sh@68 -- # keyid=0 00:25:21.360 12:22:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:21.360 12:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.360 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:21.360 12:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.360 12:22:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:21.360 12:22:14 -- nvmf/common.sh@717 -- # local ip 00:25:21.360 12:22:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:21.360 12:22:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:21.360 12:22:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.360 12:22:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.360 12:22:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:21.360 12:22:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.360 12:22:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:21.360 12:22:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:21.360 12:22:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:21.360 12:22:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:21.360 12:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.360 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:21.621 nvme0n1 00:25:21.621 12:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.621 12:22:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.621 12:22:14 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.621 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:21.621 12:22:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:21.621 12:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.621 12:22:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.621 12:22:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.621 12:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.621 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:21.621 12:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.621 12:22:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:21.621 12:22:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:21.621 12:22:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:21.621 12:22:14 -- host/auth.sh@44 -- # digest=sha384 00:25:21.621 12:22:14 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.621 12:22:14 -- host/auth.sh@44 -- # keyid=1 00:25:21.621 12:22:14 -- host/auth.sh@45 -- # key=DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:21.621 12:22:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:21.621 12:22:14 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:21.621 12:22:14 -- host/auth.sh@49 -- # echo DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:21.621 12:22:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:25:21.621 12:22:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:21.621 12:22:14 -- host/auth.sh@68 -- # digest=sha384 00:25:21.621 12:22:14 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:21.621 12:22:14 -- host/auth.sh@68 -- # keyid=1 00:25:21.621 12:22:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:21.621 12:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.621 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:21.621 12:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.621 12:22:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:21.621 12:22:14 -- nvmf/common.sh@717 -- # local ip 00:25:21.621 12:22:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:21.621 12:22:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:21.621 12:22:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.621 12:22:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.621 12:22:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:21.621 12:22:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.621 12:22:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:21.621 12:22:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:21.621 12:22:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:21.621 12:22:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:21.621 12:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.621 12:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:21.880 nvme0n1 00:25:21.880 12:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.880 12:22:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.880 12:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.880 12:22:15 -- common/autotest_common.sh@10 -- # set +x 
00:25:21.880 12:22:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:21.880 12:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.880 12:22:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.880 12:22:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.880 12:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.880 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:21.880 12:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.880 12:22:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:21.880 12:22:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:21.880 12:22:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:21.880 12:22:15 -- host/auth.sh@44 -- # digest=sha384 00:25:21.880 12:22:15 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.880 12:22:15 -- host/auth.sh@44 -- # keyid=2 00:25:21.880 12:22:15 -- host/auth.sh@45 -- # key=DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:21.880 12:22:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:21.880 12:22:15 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:21.880 12:22:15 -- host/auth.sh@49 -- # echo DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:21.880 12:22:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:25:21.880 12:22:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:21.880 12:22:15 -- host/auth.sh@68 -- # digest=sha384 00:25:21.880 12:22:15 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:21.880 12:22:15 -- host/auth.sh@68 -- # keyid=2 00:25:21.880 12:22:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:21.880 12:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.880 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:21.880 12:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.880 12:22:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:21.880 12:22:15 -- nvmf/common.sh@717 -- # local ip 00:25:21.880 12:22:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:21.880 12:22:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:21.880 12:22:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.880 12:22:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.880 12:22:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:21.880 12:22:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.880 12:22:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:21.880 12:22:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:21.880 12:22:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:21.880 12:22:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:21.880 12:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.880 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:22.140 nvme0n1 00:25:22.140 12:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.140 12:22:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.140 12:22:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:22.140 12:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.140 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:22.140 12:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.140 12:22:15 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.140 12:22:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.140 12:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.140 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:22.140 12:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.140 12:22:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:22.140 12:22:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:22.140 12:22:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:22.140 12:22:15 -- host/auth.sh@44 -- # digest=sha384 00:25:22.140 12:22:15 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.140 12:22:15 -- host/auth.sh@44 -- # keyid=3 00:25:22.140 12:22:15 -- host/auth.sh@45 -- # key=DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:22.140 12:22:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:22.140 12:22:15 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:22.140 12:22:15 -- host/auth.sh@49 -- # echo DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:22.140 12:22:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:25:22.140 12:22:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:22.140 12:22:15 -- host/auth.sh@68 -- # digest=sha384 00:25:22.140 12:22:15 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:22.140 12:22:15 -- host/auth.sh@68 -- # keyid=3 00:25:22.140 12:22:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:22.140 12:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.140 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:22.140 12:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.140 12:22:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:22.140 12:22:15 -- nvmf/common.sh@717 -- # local ip 00:25:22.140 12:22:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:22.140 12:22:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:22.140 12:22:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.140 12:22:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.140 12:22:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:22.140 12:22:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.140 12:22:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:22.140 12:22:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:22.140 12:22:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:22.140 12:22:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:22.140 12:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.140 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:22.399 nvme0n1 00:25:22.399 12:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.399 12:22:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.399 12:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.399 12:22:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:22.399 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:22.399 12:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.399 12:22:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.399 12:22:15 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:22.399 12:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.399 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:22.399 12:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.399 12:22:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:22.399 12:22:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:22.399 12:22:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:22.399 12:22:15 -- host/auth.sh@44 -- # digest=sha384 00:25:22.399 12:22:15 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.399 12:22:15 -- host/auth.sh@44 -- # keyid=4 00:25:22.399 12:22:15 -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:22.399 12:22:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:22.399 12:22:15 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:22.399 12:22:15 -- host/auth.sh@49 -- # echo DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:22.399 12:22:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:25:22.399 12:22:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:22.399 12:22:15 -- host/auth.sh@68 -- # digest=sha384 00:25:22.399 12:22:15 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:22.399 12:22:15 -- host/auth.sh@68 -- # keyid=4 00:25:22.399 12:22:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:22.399 12:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.399 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:22.399 12:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.399 12:22:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:22.399 12:22:15 -- nvmf/common.sh@717 -- # local ip 00:25:22.399 12:22:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:22.399 12:22:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:22.399 12:22:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.399 12:22:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.399 12:22:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:22.399 12:22:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.399 12:22:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:22.399 12:22:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:22.399 12:22:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:22.399 12:22:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:22.399 12:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.399 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:22.658 nvme0n1 00:25:22.658 12:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.658 12:22:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.658 12:22:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:22.658 12:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.658 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:22.658 12:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.658 12:22:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.658 12:22:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.658 12:22:15 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.658 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:22.658 12:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.658 12:22:15 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:22.658 12:22:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:22.658 12:22:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:22.658 12:22:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:22.658 12:22:15 -- host/auth.sh@44 -- # digest=sha384 00:25:22.658 12:22:15 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:22.658 12:22:15 -- host/auth.sh@44 -- # keyid=0 00:25:22.658 12:22:15 -- host/auth.sh@45 -- # key=DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:22.658 12:22:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:22.658 12:22:15 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:22.658 12:22:15 -- host/auth.sh@49 -- # echo DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:22.658 12:22:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:25:22.658 12:22:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:22.658 12:22:15 -- host/auth.sh@68 -- # digest=sha384 00:25:22.658 12:22:15 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:22.658 12:22:15 -- host/auth.sh@68 -- # keyid=0 00:25:22.658 12:22:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:22.658 12:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.658 12:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:22.658 12:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.658 12:22:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:22.658 12:22:16 -- nvmf/common.sh@717 -- # local ip 00:25:22.658 12:22:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:22.658 12:22:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:22.658 12:22:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.658 12:22:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.658 12:22:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:22.658 12:22:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.658 12:22:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:22.658 12:22:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:22.658 12:22:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:22.658 12:22:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:22.658 12:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.658 12:22:16 -- common/autotest_common.sh@10 -- # set +x 00:25:22.918 nvme0n1 00:25:22.918 12:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.918 12:22:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.918 12:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.918 12:22:16 -- common/autotest_common.sh@10 -- # set +x 00:25:22.918 12:22:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:22.918 12:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.176 12:22:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.176 12:22:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.176 12:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.176 12:22:16 -- 
common/autotest_common.sh@10 -- # set +x 00:25:23.176 12:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.176 12:22:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:23.176 12:22:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:23.176 12:22:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:23.176 12:22:16 -- host/auth.sh@44 -- # digest=sha384 00:25:23.176 12:22:16 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.176 12:22:16 -- host/auth.sh@44 -- # keyid=1 00:25:23.176 12:22:16 -- host/auth.sh@45 -- # key=DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:23.176 12:22:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:23.176 12:22:16 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:23.176 12:22:16 -- host/auth.sh@49 -- # echo DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:23.176 12:22:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:25:23.176 12:22:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:23.176 12:22:16 -- host/auth.sh@68 -- # digest=sha384 00:25:23.176 12:22:16 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:23.176 12:22:16 -- host/auth.sh@68 -- # keyid=1 00:25:23.176 12:22:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:23.176 12:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.176 12:22:16 -- common/autotest_common.sh@10 -- # set +x 00:25:23.176 12:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.176 12:22:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:23.176 12:22:16 -- nvmf/common.sh@717 -- # local ip 00:25:23.176 12:22:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:23.176 12:22:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:23.176 12:22:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.176 12:22:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.176 12:22:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:23.176 12:22:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.176 12:22:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:23.176 12:22:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:23.176 12:22:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:23.176 12:22:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:23.176 12:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.176 12:22:16 -- common/autotest_common.sh@10 -- # set +x 00:25:23.435 nvme0n1 00:25:23.435 12:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.435 12:22:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.435 12:22:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:23.435 12:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.435 12:22:16 -- common/autotest_common.sh@10 -- # set +x 00:25:23.435 12:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.435 12:22:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.435 12:22:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.435 12:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.435 12:22:16 -- common/autotest_common.sh@10 -- # set +x 00:25:23.435 12:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:25:23.435 12:22:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:23.435 12:22:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:23.435 12:22:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:23.435 12:22:16 -- host/auth.sh@44 -- # digest=sha384 00:25:23.435 12:22:16 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.436 12:22:16 -- host/auth.sh@44 -- # keyid=2 00:25:23.436 12:22:16 -- host/auth.sh@45 -- # key=DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:23.436 12:22:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:23.436 12:22:16 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:23.436 12:22:16 -- host/auth.sh@49 -- # echo DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:23.436 12:22:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:25:23.436 12:22:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:23.436 12:22:16 -- host/auth.sh@68 -- # digest=sha384 00:25:23.436 12:22:16 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:23.436 12:22:16 -- host/auth.sh@68 -- # keyid=2 00:25:23.436 12:22:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:23.436 12:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.436 12:22:16 -- common/autotest_common.sh@10 -- # set +x 00:25:23.436 12:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.436 12:22:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:23.436 12:22:16 -- nvmf/common.sh@717 -- # local ip 00:25:23.436 12:22:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:23.436 12:22:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:23.436 12:22:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.436 12:22:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.436 12:22:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:23.436 12:22:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.436 12:22:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:23.436 12:22:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:23.436 12:22:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:23.436 12:22:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:23.436 12:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.436 12:22:16 -- common/autotest_common.sh@10 -- # set +x 00:25:24.003 nvme0n1 00:25:24.003 12:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.003 12:22:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.003 12:22:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:24.003 12:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.003 12:22:17 -- common/autotest_common.sh@10 -- # set +x 00:25:24.003 12:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.003 12:22:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.003 12:22:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.003 12:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.003 12:22:17 -- common/autotest_common.sh@10 -- # set +x 00:25:24.003 12:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.003 12:22:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:24.003 12:22:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 
00:25:24.003 12:22:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:24.003 12:22:17 -- host/auth.sh@44 -- # digest=sha384 00:25:24.003 12:22:17 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.003 12:22:17 -- host/auth.sh@44 -- # keyid=3 00:25:24.003 12:22:17 -- host/auth.sh@45 -- # key=DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:24.003 12:22:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:24.003 12:22:17 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:24.003 12:22:17 -- host/auth.sh@49 -- # echo DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:24.003 12:22:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:25:24.003 12:22:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:24.004 12:22:17 -- host/auth.sh@68 -- # digest=sha384 00:25:24.004 12:22:17 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:24.004 12:22:17 -- host/auth.sh@68 -- # keyid=3 00:25:24.004 12:22:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:24.004 12:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.004 12:22:17 -- common/autotest_common.sh@10 -- # set +x 00:25:24.004 12:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.004 12:22:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:24.004 12:22:17 -- nvmf/common.sh@717 -- # local ip 00:25:24.004 12:22:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:24.004 12:22:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:24.004 12:22:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.004 12:22:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.004 12:22:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:24.004 12:22:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.004 12:22:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:24.004 12:22:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:24.004 12:22:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:24.004 12:22:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:24.004 12:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.004 12:22:17 -- common/autotest_common.sh@10 -- # set +x 00:25:24.263 nvme0n1 00:25:24.263 12:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.263 12:22:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.263 12:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.263 12:22:17 -- common/autotest_common.sh@10 -- # set +x 00:25:24.263 12:22:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:24.263 12:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.263 12:22:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.263 12:22:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.263 12:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.263 12:22:17 -- common/autotest_common.sh@10 -- # set +x 00:25:24.263 12:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.263 12:22:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:24.263 12:22:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:24.263 12:22:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:24.263 12:22:17 -- host/auth.sh@44 -- 
# digest=sha384 00:25:24.263 12:22:17 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.263 12:22:17 -- host/auth.sh@44 -- # keyid=4 00:25:24.263 12:22:17 -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:24.263 12:22:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:24.263 12:22:17 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:24.263 12:22:17 -- host/auth.sh@49 -- # echo DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:24.263 12:22:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:25:24.263 12:22:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:24.263 12:22:17 -- host/auth.sh@68 -- # digest=sha384 00:25:24.263 12:22:17 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:24.263 12:22:17 -- host/auth.sh@68 -- # keyid=4 00:25:24.263 12:22:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:24.263 12:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.263 12:22:17 -- common/autotest_common.sh@10 -- # set +x 00:25:24.263 12:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.263 12:22:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:24.263 12:22:17 -- nvmf/common.sh@717 -- # local ip 00:25:24.263 12:22:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:24.263 12:22:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:24.263 12:22:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.263 12:22:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.263 12:22:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:24.263 12:22:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.263 12:22:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:24.263 12:22:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:24.263 12:22:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:24.263 12:22:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.263 12:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.263 12:22:17 -- common/autotest_common.sh@10 -- # set +x 00:25:24.831 nvme0n1 00:25:24.831 12:22:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.831 12:22:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.831 12:22:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.831 12:22:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:24.831 12:22:18 -- common/autotest_common.sh@10 -- # set +x 00:25:24.831 12:22:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.831 12:22:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.832 12:22:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.832 12:22:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.832 12:22:18 -- common/autotest_common.sh@10 -- # set +x 00:25:24.832 12:22:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.832 12:22:18 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.832 12:22:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:24.832 12:22:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:24.832 12:22:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:24.832 12:22:18 -- host/auth.sh@44 -- # 
digest=sha384 00:25:24.832 12:22:18 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:24.832 12:22:18 -- host/auth.sh@44 -- # keyid=0 00:25:24.832 12:22:18 -- host/auth.sh@45 -- # key=DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:24.832 12:22:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:24.832 12:22:18 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:24.832 12:22:18 -- host/auth.sh@49 -- # echo DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:24.832 12:22:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:25:24.832 12:22:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:24.832 12:22:18 -- host/auth.sh@68 -- # digest=sha384 00:25:24.832 12:22:18 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:24.832 12:22:18 -- host/auth.sh@68 -- # keyid=0 00:25:24.832 12:22:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:24.832 12:22:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.832 12:22:18 -- common/autotest_common.sh@10 -- # set +x 00:25:24.832 12:22:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.832 12:22:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:24.832 12:22:18 -- nvmf/common.sh@717 -- # local ip 00:25:24.832 12:22:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:24.832 12:22:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:24.832 12:22:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.832 12:22:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.832 12:22:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:24.832 12:22:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.832 12:22:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:24.832 12:22:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:24.832 12:22:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:24.832 12:22:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:24.832 12:22:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.832 12:22:18 -- common/autotest_common.sh@10 -- # set +x 00:25:25.398 nvme0n1 00:25:25.398 12:22:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.398 12:22:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.398 12:22:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.398 12:22:18 -- common/autotest_common.sh@10 -- # set +x 00:25:25.398 12:22:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:25.398 12:22:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.398 12:22:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.398 12:22:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.398 12:22:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.398 12:22:18 -- common/autotest_common.sh@10 -- # set +x 00:25:25.398 12:22:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.398 12:22:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:25.398 12:22:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:25.398 12:22:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:25.398 12:22:18 -- host/auth.sh@44 -- # digest=sha384 00:25:25.398 12:22:18 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:25.398 12:22:18 -- host/auth.sh@44 -- # keyid=1 00:25:25.398 12:22:18 -- 
host/auth.sh@45 -- # key=DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:25.398 12:22:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:25.398 12:22:18 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:25.398 12:22:18 -- host/auth.sh@49 -- # echo DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:25.398 12:22:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:25:25.398 12:22:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:25.398 12:22:18 -- host/auth.sh@68 -- # digest=sha384 00:25:25.398 12:22:18 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:25.398 12:22:18 -- host/auth.sh@68 -- # keyid=1 00:25:25.398 12:22:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:25.398 12:22:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.398 12:22:18 -- common/autotest_common.sh@10 -- # set +x 00:25:25.398 12:22:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.398 12:22:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:25.398 12:22:18 -- nvmf/common.sh@717 -- # local ip 00:25:25.398 12:22:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:25.398 12:22:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:25.398 12:22:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.398 12:22:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.398 12:22:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:25.398 12:22:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.398 12:22:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:25.398 12:22:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:25.398 12:22:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:25.398 12:22:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:25.398 12:22:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.398 12:22:18 -- common/autotest_common.sh@10 -- # set +x 00:25:26.334 nvme0n1 00:25:26.334 12:22:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.334 12:22:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.334 12:22:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.334 12:22:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:26.334 12:22:19 -- common/autotest_common.sh@10 -- # set +x 00:25:26.334 12:22:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.334 12:22:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.334 12:22:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.334 12:22:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.334 12:22:19 -- common/autotest_common.sh@10 -- # set +x 00:25:26.334 12:22:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.334 12:22:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:26.334 12:22:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:26.334 12:22:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:26.334 12:22:19 -- host/auth.sh@44 -- # digest=sha384 00:25:26.334 12:22:19 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.334 12:22:19 -- host/auth.sh@44 -- # keyid=2 00:25:26.334 12:22:19 -- host/auth.sh@45 -- # key=DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:26.334 12:22:19 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:26.334 12:22:19 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:26.334 12:22:19 -- host/auth.sh@49 -- # echo DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:26.334 12:22:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:25:26.334 12:22:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:26.334 12:22:19 -- host/auth.sh@68 -- # digest=sha384 00:25:26.334 12:22:19 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:26.334 12:22:19 -- host/auth.sh@68 -- # keyid=2 00:25:26.334 12:22:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:26.334 12:22:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.334 12:22:19 -- common/autotest_common.sh@10 -- # set +x 00:25:26.334 12:22:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.334 12:22:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:26.334 12:22:19 -- nvmf/common.sh@717 -- # local ip 00:25:26.334 12:22:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:26.334 12:22:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:26.334 12:22:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.334 12:22:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.334 12:22:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:26.334 12:22:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.334 12:22:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:26.334 12:22:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:26.334 12:22:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:26.334 12:22:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:26.334 12:22:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.334 12:22:19 -- common/autotest_common.sh@10 -- # set +x 00:25:26.910 nvme0n1 00:25:26.910 12:22:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.910 12:22:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.910 12:22:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.910 12:22:20 -- common/autotest_common.sh@10 -- # set +x 00:25:26.910 12:22:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:26.910 12:22:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.910 12:22:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.910 12:22:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.910 12:22:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.910 12:22:20 -- common/autotest_common.sh@10 -- # set +x 00:25:26.910 12:22:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.910 12:22:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:26.910 12:22:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:26.910 12:22:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:26.910 12:22:20 -- host/auth.sh@44 -- # digest=sha384 00:25:26.910 12:22:20 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.910 12:22:20 -- host/auth.sh@44 -- # keyid=3 00:25:26.910 12:22:20 -- host/auth.sh@45 -- # key=DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:26.910 12:22:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:26.910 12:22:20 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:26.910 12:22:20 -- host/auth.sh@49 
-- # echo DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:26.910 12:22:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:25:26.910 12:22:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:26.910 12:22:20 -- host/auth.sh@68 -- # digest=sha384 00:25:26.910 12:22:20 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:26.910 12:22:20 -- host/auth.sh@68 -- # keyid=3 00:25:26.910 12:22:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:26.910 12:22:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.910 12:22:20 -- common/autotest_common.sh@10 -- # set +x 00:25:26.910 12:22:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.910 12:22:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:26.910 12:22:20 -- nvmf/common.sh@717 -- # local ip 00:25:26.910 12:22:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:26.910 12:22:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:26.910 12:22:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.910 12:22:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.910 12:22:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:26.910 12:22:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.910 12:22:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:26.910 12:22:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:26.910 12:22:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:26.910 12:22:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:26.910 12:22:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.910 12:22:20 -- common/autotest_common.sh@10 -- # set +x 00:25:27.484 nvme0n1 00:25:27.484 12:22:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.484 12:22:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.484 12:22:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:27.484 12:22:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.484 12:22:20 -- common/autotest_common.sh@10 -- # set +x 00:25:27.484 12:22:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.484 12:22:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.484 12:22:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.484 12:22:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.484 12:22:20 -- common/autotest_common.sh@10 -- # set +x 00:25:27.484 12:22:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.484 12:22:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:27.484 12:22:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:27.484 12:22:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:27.484 12:22:20 -- host/auth.sh@44 -- # digest=sha384 00:25:27.484 12:22:20 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.484 12:22:20 -- host/auth.sh@44 -- # keyid=4 00:25:27.484 12:22:20 -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:27.484 12:22:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:27.484 12:22:20 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:27.484 12:22:20 -- host/auth.sh@49 -- # echo 
DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:27.484 12:22:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:25:27.485 12:22:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:27.485 12:22:20 -- host/auth.sh@68 -- # digest=sha384 00:25:27.485 12:22:20 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:27.485 12:22:20 -- host/auth.sh@68 -- # keyid=4 00:25:27.485 12:22:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:27.485 12:22:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.485 12:22:20 -- common/autotest_common.sh@10 -- # set +x 00:25:27.485 12:22:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.485 12:22:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:27.485 12:22:20 -- nvmf/common.sh@717 -- # local ip 00:25:27.485 12:22:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:27.485 12:22:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:27.485 12:22:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.485 12:22:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.485 12:22:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:27.485 12:22:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.485 12:22:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:27.485 12:22:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:27.485 12:22:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:27.485 12:22:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:27.485 12:22:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.485 12:22:20 -- common/autotest_common.sh@10 -- # set +x 00:25:28.420 nvme0n1 00:25:28.420 12:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.420 12:22:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.420 12:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.420 12:22:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:28.420 12:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:28.420 12:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.420 12:22:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.420 12:22:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.420 12:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.420 12:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:28.420 12:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.420 12:22:21 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:25:28.420 12:22:21 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:28.420 12:22:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:28.420 12:22:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:28.420 12:22:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:28.420 12:22:21 -- host/auth.sh@44 -- # digest=sha512 00:25:28.420 12:22:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.420 12:22:21 -- host/auth.sh@44 -- # keyid=0 00:25:28.420 12:22:21 -- host/auth.sh@45 -- # key=DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:28.420 12:22:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:28.420 12:22:21 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:28.420 
12:22:21 -- host/auth.sh@49 -- # echo DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:28.420 12:22:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:25:28.420 12:22:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:28.420 12:22:21 -- host/auth.sh@68 -- # digest=sha512 00:25:28.420 12:22:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:28.420 12:22:21 -- host/auth.sh@68 -- # keyid=0 00:25:28.420 12:22:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:28.420 12:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.420 12:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:28.420 12:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.420 12:22:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:28.420 12:22:21 -- nvmf/common.sh@717 -- # local ip 00:25:28.420 12:22:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:28.420 12:22:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:28.420 12:22:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.420 12:22:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.420 12:22:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:28.420 12:22:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.420 12:22:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:28.420 12:22:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:28.420 12:22:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:28.420 12:22:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:28.420 12:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.420 12:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:28.420 nvme0n1 00:25:28.420 12:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.420 12:22:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.420 12:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.420 12:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:28.420 12:22:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:28.420 12:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.420 12:22:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.420 12:22:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.420 12:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.420 12:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:28.420 12:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.420 12:22:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:28.420 12:22:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:28.420 12:22:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:28.420 12:22:21 -- host/auth.sh@44 -- # digest=sha512 00:25:28.420 12:22:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.420 12:22:21 -- host/auth.sh@44 -- # keyid=1 00:25:28.420 12:22:21 -- host/auth.sh@45 -- # key=DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:28.420 12:22:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:28.420 12:22:21 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:28.420 12:22:21 -- host/auth.sh@49 -- # echo DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:28.420 12:22:21 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:25:28.420 12:22:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:28.420 12:22:21 -- host/auth.sh@68 -- # digest=sha512 00:25:28.420 12:22:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:28.420 12:22:21 -- host/auth.sh@68 -- # keyid=1 00:25:28.420 12:22:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:28.420 12:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.420 12:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:28.420 12:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.420 12:22:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:28.420 12:22:21 -- nvmf/common.sh@717 -- # local ip 00:25:28.420 12:22:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:28.420 12:22:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:28.420 12:22:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.420 12:22:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.420 12:22:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:28.420 12:22:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.420 12:22:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:28.420 12:22:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:28.420 12:22:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:28.420 12:22:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:28.420 12:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.420 12:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:28.679 nvme0n1 00:25:28.679 12:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.679 12:22:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.679 12:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.679 12:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:28.679 12:22:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:28.679 12:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.679 12:22:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.679 12:22:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.679 12:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.679 12:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:28.679 12:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.679 12:22:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:28.679 12:22:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:28.679 12:22:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:28.679 12:22:21 -- host/auth.sh@44 -- # digest=sha512 00:25:28.679 12:22:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.679 12:22:21 -- host/auth.sh@44 -- # keyid=2 00:25:28.679 12:22:21 -- host/auth.sh@45 -- # key=DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:28.679 12:22:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:28.679 12:22:21 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:28.679 12:22:21 -- host/auth.sh@49 -- # echo DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:28.679 12:22:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:25:28.679 12:22:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:28.679 12:22:21 -- 
host/auth.sh@68 -- # digest=sha512 00:25:28.679 12:22:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:28.679 12:22:21 -- host/auth.sh@68 -- # keyid=2 00:25:28.679 12:22:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:28.679 12:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.679 12:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:28.679 12:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.679 12:22:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:28.679 12:22:21 -- nvmf/common.sh@717 -- # local ip 00:25:28.679 12:22:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:28.679 12:22:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:28.679 12:22:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.679 12:22:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.679 12:22:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:28.679 12:22:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.679 12:22:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:28.679 12:22:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:28.679 12:22:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:28.679 12:22:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:28.679 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.679 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:28.679 nvme0n1 00:25:28.679 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.679 12:22:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.679 12:22:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:28.679 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.679 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:28.679 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.938 12:22:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.938 12:22:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.938 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.938 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:28.938 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.938 12:22:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:28.938 12:22:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:28.938 12:22:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:28.938 12:22:22 -- host/auth.sh@44 -- # digest=sha512 00:25:28.938 12:22:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.938 12:22:22 -- host/auth.sh@44 -- # keyid=3 00:25:28.938 12:22:22 -- host/auth.sh@45 -- # key=DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:28.938 12:22:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:28.938 12:22:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:28.938 12:22:22 -- host/auth.sh@49 -- # echo DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:28.938 12:22:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:25:28.938 12:22:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:28.938 12:22:22 -- host/auth.sh@68 -- # digest=sha512 00:25:28.938 12:22:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:28.938 12:22:22 
-- host/auth.sh@68 -- # keyid=3 00:25:28.938 12:22:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:28.938 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.938 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:28.938 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.938 12:22:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:28.938 12:22:22 -- nvmf/common.sh@717 -- # local ip 00:25:28.938 12:22:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:28.938 12:22:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:28.938 12:22:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.938 12:22:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.938 12:22:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:28.938 12:22:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.938 12:22:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:28.938 12:22:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:28.938 12:22:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:28.938 12:22:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:28.938 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.938 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:28.938 nvme0n1 00:25:28.938 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.938 12:22:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.938 12:22:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:28.938 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.938 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:28.938 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.938 12:22:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.938 12:22:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.938 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.938 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:28.938 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.938 12:22:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:28.938 12:22:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:28.938 12:22:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:28.938 12:22:22 -- host/auth.sh@44 -- # digest=sha512 00:25:28.938 12:22:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.938 12:22:22 -- host/auth.sh@44 -- # keyid=4 00:25:28.938 12:22:22 -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:28.938 12:22:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:28.938 12:22:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:28.938 12:22:22 -- host/auth.sh@49 -- # echo DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:28.938 12:22:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:25:28.938 12:22:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:28.938 12:22:22 -- host/auth.sh@68 -- # digest=sha512 00:25:28.938 12:22:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:28.938 12:22:22 -- host/auth.sh@68 -- # keyid=4 00:25:28.938 12:22:22 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:28.938 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.938 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:28.938 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.938 12:22:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:28.938 12:22:22 -- nvmf/common.sh@717 -- # local ip 00:25:28.938 12:22:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:28.938 12:22:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:28.938 12:22:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.938 12:22:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.938 12:22:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:28.938 12:22:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.938 12:22:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:28.938 12:22:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:28.938 12:22:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:28.938 12:22:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:28.938 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.938 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:29.196 nvme0n1 00:25:29.196 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.196 12:22:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.196 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.196 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:29.196 12:22:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:29.196 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.196 12:22:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.196 12:22:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.196 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.196 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:29.196 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.196 12:22:22 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.196 12:22:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:29.196 12:22:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:29.196 12:22:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:29.196 12:22:22 -- host/auth.sh@44 -- # digest=sha512 00:25:29.196 12:22:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.196 12:22:22 -- host/auth.sh@44 -- # keyid=0 00:25:29.196 12:22:22 -- host/auth.sh@45 -- # key=DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:29.196 12:22:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:29.196 12:22:22 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:29.196 12:22:22 -- host/auth.sh@49 -- # echo DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:29.196 12:22:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:25:29.196 12:22:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:29.196 12:22:22 -- host/auth.sh@68 -- # digest=sha512 00:25:29.196 12:22:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:29.196 12:22:22 -- host/auth.sh@68 -- # keyid=0 00:25:29.197 12:22:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
00:25:29.197 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.197 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:29.197 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.197 12:22:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:29.197 12:22:22 -- nvmf/common.sh@717 -- # local ip 00:25:29.197 12:22:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:29.197 12:22:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:29.197 12:22:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.197 12:22:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.197 12:22:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:29.197 12:22:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.197 12:22:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:29.197 12:22:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:29.197 12:22:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:29.197 12:22:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:29.197 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.197 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:29.455 nvme0n1 00:25:29.455 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.455 12:22:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.455 12:22:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:29.455 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.455 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:29.455 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.455 12:22:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.455 12:22:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.455 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.455 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:29.455 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.455 12:22:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:29.455 12:22:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:29.455 12:22:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:29.455 12:22:22 -- host/auth.sh@44 -- # digest=sha512 00:25:29.455 12:22:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.455 12:22:22 -- host/auth.sh@44 -- # keyid=1 00:25:29.455 12:22:22 -- host/auth.sh@45 -- # key=DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:29.455 12:22:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:29.455 12:22:22 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:29.455 12:22:22 -- host/auth.sh@49 -- # echo DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:29.455 12:22:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:25:29.455 12:22:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:29.455 12:22:22 -- host/auth.sh@68 -- # digest=sha512 00:25:29.455 12:22:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:29.455 12:22:22 -- host/auth.sh@68 -- # keyid=1 00:25:29.455 12:22:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:29.455 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.455 12:22:22 -- 
common/autotest_common.sh@10 -- # set +x 00:25:29.455 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.455 12:22:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:29.455 12:22:22 -- nvmf/common.sh@717 -- # local ip 00:25:29.455 12:22:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:29.455 12:22:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:29.455 12:22:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.455 12:22:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.455 12:22:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:29.455 12:22:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.455 12:22:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:29.455 12:22:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:29.455 12:22:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:29.455 12:22:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:29.455 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.455 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:29.455 nvme0n1 00:25:29.455 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.455 12:22:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.455 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.455 12:22:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:29.455 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:29.455 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.714 12:22:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.714 12:22:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.714 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.714 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:29.714 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.714 12:22:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:29.714 12:22:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:29.714 12:22:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:29.714 12:22:22 -- host/auth.sh@44 -- # digest=sha512 00:25:29.714 12:22:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.714 12:22:22 -- host/auth.sh@44 -- # keyid=2 00:25:29.714 12:22:22 -- host/auth.sh@45 -- # key=DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:29.714 12:22:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:29.714 12:22:22 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:29.714 12:22:22 -- host/auth.sh@49 -- # echo DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:29.714 12:22:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:25:29.714 12:22:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:29.714 12:22:22 -- host/auth.sh@68 -- # digest=sha512 00:25:29.714 12:22:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:29.714 12:22:22 -- host/auth.sh@68 -- # keyid=2 00:25:29.714 12:22:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:29.714 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.714 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:29.714 12:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.714 12:22:22 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:25:29.714 12:22:22 -- nvmf/common.sh@717 -- # local ip 00:25:29.714 12:22:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:29.714 12:22:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:29.714 12:22:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.714 12:22:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.714 12:22:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:29.714 12:22:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.714 12:22:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:29.714 12:22:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:29.714 12:22:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:29.714 12:22:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:29.714 12:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.714 12:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:29.714 nvme0n1 00:25:29.714 12:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.714 12:22:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.714 12:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.714 12:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:29.714 12:22:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:29.714 12:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.715 12:22:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.715 12:22:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.715 12:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.715 12:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:29.715 12:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.715 12:22:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:29.715 12:22:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:29.715 12:22:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:29.715 12:22:23 -- host/auth.sh@44 -- # digest=sha512 00:25:29.715 12:22:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.715 12:22:23 -- host/auth.sh@44 -- # keyid=3 00:25:29.715 12:22:23 -- host/auth.sh@45 -- # key=DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:29.715 12:22:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:29.715 12:22:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:29.715 12:22:23 -- host/auth.sh@49 -- # echo DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:29.715 12:22:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:25:29.715 12:22:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:29.715 12:22:23 -- host/auth.sh@68 -- # digest=sha512 00:25:29.715 12:22:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:29.715 12:22:23 -- host/auth.sh@68 -- # keyid=3 00:25:29.715 12:22:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:29.715 12:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.715 12:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:29.973 12:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.973 12:22:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:29.973 12:22:23 -- nvmf/common.sh@717 -- # local ip 00:25:29.973 12:22:23 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:25:29.973 12:22:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:29.973 12:22:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.973 12:22:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.973 12:22:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:29.973 12:22:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.973 12:22:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:29.973 12:22:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:29.973 12:22:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:29.973 12:22:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:29.973 12:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.973 12:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:29.973 nvme0n1 00:25:29.973 12:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.973 12:22:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.973 12:22:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:29.973 12:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.973 12:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:29.973 12:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.973 12:22:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.973 12:22:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.973 12:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.973 12:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:29.973 12:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.973 12:22:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:29.973 12:22:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:29.973 12:22:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:29.973 12:22:23 -- host/auth.sh@44 -- # digest=sha512 00:25:29.973 12:22:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.973 12:22:23 -- host/auth.sh@44 -- # keyid=4 00:25:29.973 12:22:23 -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:29.973 12:22:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:29.973 12:22:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:29.973 12:22:23 -- host/auth.sh@49 -- # echo DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:29.973 12:22:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:25:29.973 12:22:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:29.973 12:22:23 -- host/auth.sh@68 -- # digest=sha512 00:25:29.973 12:22:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:29.973 12:22:23 -- host/auth.sh@68 -- # keyid=4 00:25:29.973 12:22:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:29.973 12:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.973 12:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:29.973 12:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.973 12:22:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:29.973 12:22:23 -- nvmf/common.sh@717 -- # local ip 00:25:29.973 12:22:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:29.973 12:22:23 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:25:29.973 12:22:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.973 12:22:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.973 12:22:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:29.974 12:22:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.974 12:22:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:29.974 12:22:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:29.974 12:22:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:29.974 12:22:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.974 12:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.974 12:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:30.231 nvme0n1 00:25:30.231 12:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.231 12:22:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:30.231 12:22:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.231 12:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.231 12:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:30.231 12:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.231 12:22:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.231 12:22:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.231 12:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.231 12:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:30.231 12:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.231 12:22:23 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:30.231 12:22:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:30.231 12:22:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:30.231 12:22:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:30.231 12:22:23 -- host/auth.sh@44 -- # digest=sha512 00:25:30.231 12:22:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:30.231 12:22:23 -- host/auth.sh@44 -- # keyid=0 00:25:30.231 12:22:23 -- host/auth.sh@45 -- # key=DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:30.231 12:22:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:30.231 12:22:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:30.231 12:22:23 -- host/auth.sh@49 -- # echo DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:30.231 12:22:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:25:30.231 12:22:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:30.231 12:22:23 -- host/auth.sh@68 -- # digest=sha512 00:25:30.231 12:22:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:30.231 12:22:23 -- host/auth.sh@68 -- # keyid=0 00:25:30.231 12:22:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:30.231 12:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.231 12:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:30.231 12:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.231 12:22:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:30.231 12:22:23 -- nvmf/common.sh@717 -- # local ip 00:25:30.231 12:22:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:30.231 12:22:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:30.231 12:22:23 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.231 12:22:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.231 12:22:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:30.231 12:22:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.231 12:22:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:30.231 12:22:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:30.231 12:22:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:30.231 12:22:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:30.231 12:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.231 12:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:30.489 nvme0n1 00:25:30.489 12:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.489 12:22:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.489 12:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.489 12:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:30.489 12:22:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:30.489 12:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.489 12:22:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.489 12:22:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.489 12:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.489 12:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:30.489 12:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.489 12:22:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:30.489 12:22:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:30.489 12:22:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:30.489 12:22:23 -- host/auth.sh@44 -- # digest=sha512 00:25:30.489 12:22:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:30.489 12:22:23 -- host/auth.sh@44 -- # keyid=1 00:25:30.489 12:22:23 -- host/auth.sh@45 -- # key=DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:30.489 12:22:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:30.489 12:22:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:30.489 12:22:23 -- host/auth.sh@49 -- # echo DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:30.489 12:22:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:25:30.489 12:22:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:30.489 12:22:23 -- host/auth.sh@68 -- # digest=sha512 00:25:30.489 12:22:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:30.489 12:22:23 -- host/auth.sh@68 -- # keyid=1 00:25:30.489 12:22:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:30.489 12:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.489 12:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:30.489 12:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.489 12:22:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:30.489 12:22:23 -- nvmf/common.sh@717 -- # local ip 00:25:30.489 12:22:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:30.489 12:22:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:30.489 12:22:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.489 12:22:23 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.489 12:22:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:30.489 12:22:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.489 12:22:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:30.489 12:22:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:30.489 12:22:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:30.489 12:22:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:30.489 12:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.489 12:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:30.748 nvme0n1 00:25:30.748 12:22:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.748 12:22:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.748 12:22:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.748 12:22:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:30.748 12:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:30.748 12:22:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.748 12:22:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.748 12:22:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.748 12:22:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.748 12:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:30.748 12:22:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.748 12:22:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:30.748 12:22:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:30.748 12:22:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:30.748 12:22:24 -- host/auth.sh@44 -- # digest=sha512 00:25:30.748 12:22:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:30.748 12:22:24 -- host/auth.sh@44 -- # keyid=2 00:25:30.748 12:22:24 -- host/auth.sh@45 -- # key=DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:30.748 12:22:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:30.748 12:22:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:30.748 12:22:24 -- host/auth.sh@49 -- # echo DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:30.748 12:22:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:25:30.748 12:22:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:30.748 12:22:24 -- host/auth.sh@68 -- # digest=sha512 00:25:30.748 12:22:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:30.748 12:22:24 -- host/auth.sh@68 -- # keyid=2 00:25:30.748 12:22:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:30.748 12:22:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.748 12:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:30.748 12:22:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.748 12:22:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:30.748 12:22:24 -- nvmf/common.sh@717 -- # local ip 00:25:30.748 12:22:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:30.748 12:22:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:30.748 12:22:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.748 12:22:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.748 12:22:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:30.748 12:22:24 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:25:30.748 12:22:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:30.748 12:22:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:30.748 12:22:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:30.748 12:22:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:30.748 12:22:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.748 12:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:31.006 nvme0n1 00:25:31.006 12:22:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.006 12:22:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.006 12:22:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:31.006 12:22:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.006 12:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:31.006 12:22:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.006 12:22:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.006 12:22:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.006 12:22:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.006 12:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:31.006 12:22:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.007 12:22:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:31.007 12:22:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:31.007 12:22:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:31.007 12:22:24 -- host/auth.sh@44 -- # digest=sha512 00:25:31.007 12:22:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.007 12:22:24 -- host/auth.sh@44 -- # keyid=3 00:25:31.007 12:22:24 -- host/auth.sh@45 -- # key=DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:31.007 12:22:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:31.007 12:22:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:31.007 12:22:24 -- host/auth.sh@49 -- # echo DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:31.007 12:22:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:25:31.007 12:22:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:31.007 12:22:24 -- host/auth.sh@68 -- # digest=sha512 00:25:31.007 12:22:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:31.007 12:22:24 -- host/auth.sh@68 -- # keyid=3 00:25:31.007 12:22:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:31.007 12:22:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.007 12:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:31.007 12:22:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.007 12:22:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:31.007 12:22:24 -- nvmf/common.sh@717 -- # local ip 00:25:31.007 12:22:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:31.007 12:22:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:31.007 12:22:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.007 12:22:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.007 12:22:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:31.007 12:22:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.007 12:22:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:31.007 12:22:24 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:31.007 12:22:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:31.007 12:22:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:31.007 12:22:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.007 12:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:31.276 nvme0n1 00:25:31.276 12:22:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.276 12:22:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.276 12:22:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.276 12:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:31.276 12:22:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:31.276 12:22:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.276 12:22:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.276 12:22:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.276 12:22:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.276 12:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:31.276 12:22:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.276 12:22:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:31.276 12:22:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:31.276 12:22:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:31.276 12:22:24 -- host/auth.sh@44 -- # digest=sha512 00:25:31.276 12:22:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.276 12:22:24 -- host/auth.sh@44 -- # keyid=4 00:25:31.277 12:22:24 -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:31.277 12:22:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:31.277 12:22:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:31.277 12:22:24 -- host/auth.sh@49 -- # echo DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:31.277 12:22:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:25:31.277 12:22:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:31.277 12:22:24 -- host/auth.sh@68 -- # digest=sha512 00:25:31.277 12:22:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:31.277 12:22:24 -- host/auth.sh@68 -- # keyid=4 00:25:31.277 12:22:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:31.277 12:22:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.277 12:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:31.277 12:22:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.277 12:22:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:31.277 12:22:24 -- nvmf/common.sh@717 -- # local ip 00:25:31.277 12:22:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:31.277 12:22:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:31.277 12:22:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.277 12:22:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.277 12:22:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:31.277 12:22:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.277 12:22:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:31.277 12:22:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:31.277 12:22:24 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:31.277 12:22:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:31.277 12:22:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.277 12:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:31.549 nvme0n1 00:25:31.549 12:22:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.549 12:22:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.549 12:22:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:31.549 12:22:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.549 12:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:31.549 12:22:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.549 12:22:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.549 12:22:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.549 12:22:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.549 12:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:31.549 12:22:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.549 12:22:24 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:31.549 12:22:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:31.549 12:22:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:31.549 12:22:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:31.549 12:22:24 -- host/auth.sh@44 -- # digest=sha512 00:25:31.549 12:22:24 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:31.549 12:22:24 -- host/auth.sh@44 -- # keyid=0 00:25:31.549 12:22:24 -- host/auth.sh@45 -- # key=DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:31.549 12:22:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:31.549 12:22:24 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:31.549 12:22:24 -- host/auth.sh@49 -- # echo DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:31.549 12:22:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:25:31.549 12:22:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:31.549 12:22:24 -- host/auth.sh@68 -- # digest=sha512 00:25:31.549 12:22:24 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:31.549 12:22:24 -- host/auth.sh@68 -- # keyid=0 00:25:31.549 12:22:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:31.549 12:22:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.549 12:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:31.549 12:22:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.549 12:22:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:31.549 12:22:24 -- nvmf/common.sh@717 -- # local ip 00:25:31.549 12:22:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:31.549 12:22:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:31.549 12:22:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.550 12:22:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.550 12:22:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:31.550 12:22:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.550 12:22:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:31.550 12:22:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:31.550 12:22:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:31.550 12:22:24 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:31.550 12:22:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.550 12:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:32.117 nvme0n1 00:25:32.117 12:22:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.117 12:22:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.117 12:22:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:32.117 12:22:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.117 12:22:25 -- common/autotest_common.sh@10 -- # set +x 00:25:32.117 12:22:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.117 12:22:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.117 12:22:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.117 12:22:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.117 12:22:25 -- common/autotest_common.sh@10 -- # set +x 00:25:32.117 12:22:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.117 12:22:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:32.117 12:22:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:32.117 12:22:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:32.117 12:22:25 -- host/auth.sh@44 -- # digest=sha512 00:25:32.117 12:22:25 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:32.117 12:22:25 -- host/auth.sh@44 -- # keyid=1 00:25:32.117 12:22:25 -- host/auth.sh@45 -- # key=DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:32.117 12:22:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:32.117 12:22:25 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:32.117 12:22:25 -- host/auth.sh@49 -- # echo DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:32.117 12:22:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:25:32.117 12:22:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:32.117 12:22:25 -- host/auth.sh@68 -- # digest=sha512 00:25:32.117 12:22:25 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:32.117 12:22:25 -- host/auth.sh@68 -- # keyid=1 00:25:32.117 12:22:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:32.117 12:22:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.117 12:22:25 -- common/autotest_common.sh@10 -- # set +x 00:25:32.117 12:22:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.117 12:22:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:32.117 12:22:25 -- nvmf/common.sh@717 -- # local ip 00:25:32.117 12:22:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:32.117 12:22:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:32.117 12:22:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.117 12:22:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.117 12:22:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:32.117 12:22:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.117 12:22:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:32.117 12:22:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:32.117 12:22:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:32.117 12:22:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:32.117 12:22:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.117 12:22:25 -- common/autotest_common.sh@10 -- # set +x 00:25:32.376 nvme0n1 00:25:32.376 12:22:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.376 12:22:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.376 12:22:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:32.376 12:22:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.376 12:22:25 -- common/autotest_common.sh@10 -- # set +x 00:25:32.376 12:22:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.634 12:22:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.634 12:22:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.634 12:22:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.634 12:22:25 -- common/autotest_common.sh@10 -- # set +x 00:25:32.634 12:22:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.634 12:22:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:32.635 12:22:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:32.635 12:22:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:32.635 12:22:25 -- host/auth.sh@44 -- # digest=sha512 00:25:32.635 12:22:25 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:32.635 12:22:25 -- host/auth.sh@44 -- # keyid=2 00:25:32.635 12:22:25 -- host/auth.sh@45 -- # key=DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:32.635 12:22:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:32.635 12:22:25 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:32.635 12:22:25 -- host/auth.sh@49 -- # echo DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:32.635 12:22:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:25:32.635 12:22:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:32.635 12:22:25 -- host/auth.sh@68 -- # digest=sha512 00:25:32.635 12:22:25 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:32.635 12:22:25 -- host/auth.sh@68 -- # keyid=2 00:25:32.635 12:22:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:32.635 12:22:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.635 12:22:25 -- common/autotest_common.sh@10 -- # set +x 00:25:32.635 12:22:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.635 12:22:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:32.635 12:22:25 -- nvmf/common.sh@717 -- # local ip 00:25:32.635 12:22:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:32.635 12:22:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:32.635 12:22:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.635 12:22:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.635 12:22:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:32.635 12:22:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.635 12:22:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:32.635 12:22:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:32.635 12:22:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:32.635 12:22:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:32.635 12:22:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.635 12:22:25 -- 
common/autotest_common.sh@10 -- # set +x 00:25:32.894 nvme0n1 00:25:32.894 12:22:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.894 12:22:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.894 12:22:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.894 12:22:26 -- common/autotest_common.sh@10 -- # set +x 00:25:32.894 12:22:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:32.894 12:22:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.894 12:22:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.894 12:22:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.894 12:22:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.894 12:22:26 -- common/autotest_common.sh@10 -- # set +x 00:25:32.894 12:22:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.894 12:22:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:32.894 12:22:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:32.894 12:22:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:32.894 12:22:26 -- host/auth.sh@44 -- # digest=sha512 00:25:32.894 12:22:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:32.894 12:22:26 -- host/auth.sh@44 -- # keyid=3 00:25:32.894 12:22:26 -- host/auth.sh@45 -- # key=DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:32.894 12:22:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:32.894 12:22:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:32.894 12:22:26 -- host/auth.sh@49 -- # echo DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:32.894 12:22:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:25:32.894 12:22:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:32.894 12:22:26 -- host/auth.sh@68 -- # digest=sha512 00:25:32.894 12:22:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:32.894 12:22:26 -- host/auth.sh@68 -- # keyid=3 00:25:32.894 12:22:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:32.894 12:22:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.894 12:22:26 -- common/autotest_common.sh@10 -- # set +x 00:25:32.894 12:22:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.894 12:22:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:32.894 12:22:26 -- nvmf/common.sh@717 -- # local ip 00:25:32.894 12:22:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:32.894 12:22:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:32.894 12:22:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.894 12:22:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.894 12:22:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:32.894 12:22:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.894 12:22:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:32.894 12:22:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:32.894 12:22:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:32.894 12:22:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:32.894 12:22:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.894 12:22:26 -- common/autotest_common.sh@10 -- # set +x 00:25:33.462 nvme0n1 00:25:33.462 12:22:26 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:25:33.462 12:22:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.462 12:22:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:33.462 12:22:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.462 12:22:26 -- common/autotest_common.sh@10 -- # set +x 00:25:33.462 12:22:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.462 12:22:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.462 12:22:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.462 12:22:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.462 12:22:26 -- common/autotest_common.sh@10 -- # set +x 00:25:33.462 12:22:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.462 12:22:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:33.462 12:22:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:33.462 12:22:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:33.462 12:22:26 -- host/auth.sh@44 -- # digest=sha512 00:25:33.462 12:22:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.462 12:22:26 -- host/auth.sh@44 -- # keyid=4 00:25:33.462 12:22:26 -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:33.462 12:22:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:33.462 12:22:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:33.462 12:22:26 -- host/auth.sh@49 -- # echo DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:33.462 12:22:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:25:33.462 12:22:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:33.462 12:22:26 -- host/auth.sh@68 -- # digest=sha512 00:25:33.462 12:22:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:33.462 12:22:26 -- host/auth.sh@68 -- # keyid=4 00:25:33.462 12:22:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:33.462 12:22:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.462 12:22:26 -- common/autotest_common.sh@10 -- # set +x 00:25:33.462 12:22:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.462 12:22:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:33.462 12:22:26 -- nvmf/common.sh@717 -- # local ip 00:25:33.462 12:22:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:33.462 12:22:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:33.462 12:22:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.462 12:22:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.462 12:22:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:33.462 12:22:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.462 12:22:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:33.462 12:22:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:33.462 12:22:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:33.462 12:22:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.462 12:22:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.462 12:22:26 -- common/autotest_common.sh@10 -- # set +x 00:25:33.721 nvme0n1 00:25:33.721 12:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.721 12:22:27 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:33.721 12:22:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:33.721 12:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.721 12:22:27 -- common/autotest_common.sh@10 -- # set +x 00:25:33.721 12:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.980 12:22:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.980 12:22:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.980 12:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.980 12:22:27 -- common/autotest_common.sh@10 -- # set +x 00:25:33.980 12:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.980 12:22:27 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.980 12:22:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:33.980 12:22:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:33.980 12:22:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:33.980 12:22:27 -- host/auth.sh@44 -- # digest=sha512 00:25:33.980 12:22:27 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:33.980 12:22:27 -- host/auth.sh@44 -- # keyid=0 00:25:33.980 12:22:27 -- host/auth.sh@45 -- # key=DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:33.980 12:22:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:33.980 12:22:27 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:33.980 12:22:27 -- host/auth.sh@49 -- # echo DHHC-1:00:YzlkMWM2NDM5YzFjYWU5OWY3MGU1MzEzMDI4NDlmYznf0Ele: 00:25:33.980 12:22:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:25:33.980 12:22:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:33.980 12:22:27 -- host/auth.sh@68 -- # digest=sha512 00:25:33.980 12:22:27 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:33.980 12:22:27 -- host/auth.sh@68 -- # keyid=0 00:25:33.980 12:22:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:33.980 12:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.980 12:22:27 -- common/autotest_common.sh@10 -- # set +x 00:25:33.980 12:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.980 12:22:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:33.980 12:22:27 -- nvmf/common.sh@717 -- # local ip 00:25:33.980 12:22:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:33.980 12:22:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:33.980 12:22:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.980 12:22:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.980 12:22:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:33.980 12:22:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.980 12:22:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:33.980 12:22:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:33.980 12:22:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:33.980 12:22:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:33.980 12:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.980 12:22:27 -- common/autotest_common.sh@10 -- # set +x 00:25:34.547 nvme0n1 00:25:34.547 12:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.547 12:22:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.547 12:22:27 -- host/auth.sh@73 -- # jq -r '.[].name' 
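The address-selection helper traced repeatedly above before every attach (get_main_ns_ip, via nvmf/common.sh@717-731) reduces to a few lines of shell. The sketch below is a reconstruction from the xtrace lines only; the transport variable name ($TEST_TRANSPORT) and the early returns are assumptions, while the candidate map and the indirect expansion that yields 10.0.0.1 come straight from the trace.

# Sketch of the helper traced as nvmf/common.sh@717-731 above (assumed shape).
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    # The "main" namespace IP comes from a different env var per transport.
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                    # "tcp" in this run
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1                             # indirect: NVMF_INITIATOR_IP
    echo "${!ip}"                                           # 10.0.0.1 here
}
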
00:25:34.547 12:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.547 12:22:27 -- common/autotest_common.sh@10 -- # set +x 00:25:34.547 12:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.547 12:22:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.547 12:22:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.547 12:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.547 12:22:27 -- common/autotest_common.sh@10 -- # set +x 00:25:34.547 12:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.547 12:22:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:34.547 12:22:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:34.547 12:22:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:34.547 12:22:27 -- host/auth.sh@44 -- # digest=sha512 00:25:34.547 12:22:27 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:34.547 12:22:27 -- host/auth.sh@44 -- # keyid=1 00:25:34.547 12:22:27 -- host/auth.sh@45 -- # key=DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:34.547 12:22:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:34.547 12:22:27 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:34.547 12:22:27 -- host/auth.sh@49 -- # echo DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:34.547 12:22:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:25:34.547 12:22:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:34.547 12:22:27 -- host/auth.sh@68 -- # digest=sha512 00:25:34.547 12:22:27 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:34.547 12:22:27 -- host/auth.sh@68 -- # keyid=1 00:25:34.547 12:22:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:34.547 12:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.547 12:22:27 -- common/autotest_common.sh@10 -- # set +x 00:25:34.547 12:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.547 12:22:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:34.547 12:22:27 -- nvmf/common.sh@717 -- # local ip 00:25:34.547 12:22:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:34.547 12:22:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:34.548 12:22:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.548 12:22:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.548 12:22:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:34.548 12:22:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.548 12:22:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:34.548 12:22:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:34.548 12:22:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:34.548 12:22:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:34.548 12:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.548 12:22:27 -- common/autotest_common.sh@10 -- # set +x 00:25:35.115 nvme0n1 00:25:35.115 12:22:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.115 12:22:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.115 12:22:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.115 12:22:28 -- common/autotest_common.sh@10 -- # set +x 00:25:35.115 12:22:28 -- host/auth.sh@73 
-- # jq -r '.[].name' 00:25:35.373 12:22:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.373 12:22:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.373 12:22:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.373 12:22:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.373 12:22:28 -- common/autotest_common.sh@10 -- # set +x 00:25:35.373 12:22:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.373 12:22:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:35.373 12:22:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:35.373 12:22:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:35.373 12:22:28 -- host/auth.sh@44 -- # digest=sha512 00:25:35.373 12:22:28 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.373 12:22:28 -- host/auth.sh@44 -- # keyid=2 00:25:35.373 12:22:28 -- host/auth.sh@45 -- # key=DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:35.373 12:22:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:35.373 12:22:28 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:35.373 12:22:28 -- host/auth.sh@49 -- # echo DHHC-1:01:YjNhYWM2NjJjMjdjN2MwMDE5MzI2NDNmZmQyYzg3MDfyJyFL: 00:25:35.374 12:22:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:25:35.374 12:22:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:35.374 12:22:28 -- host/auth.sh@68 -- # digest=sha512 00:25:35.374 12:22:28 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:35.374 12:22:28 -- host/auth.sh@68 -- # keyid=2 00:25:35.374 12:22:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:35.374 12:22:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.374 12:22:28 -- common/autotest_common.sh@10 -- # set +x 00:25:35.374 12:22:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.374 12:22:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:35.374 12:22:28 -- nvmf/common.sh@717 -- # local ip 00:25:35.374 12:22:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:35.374 12:22:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:35.374 12:22:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.374 12:22:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.374 12:22:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:35.374 12:22:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.374 12:22:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:35.374 12:22:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:35.374 12:22:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:35.374 12:22:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:35.374 12:22:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.374 12:22:28 -- common/autotest_common.sh@10 -- # set +x 00:25:35.963 nvme0n1 00:25:35.963 12:22:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.963 12:22:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:35.963 12:22:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.963 12:22:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.963 12:22:29 -- common/autotest_common.sh@10 -- # set +x 00:25:35.963 12:22:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.963 12:22:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 
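Each sha512/ffdhe8192 iteration above is the same connect_authenticate round trip; the condensed sketch below is reconstructed only from the RPC calls visible in the trace (the key ids, DHHC-1 blobs and NQNs are the test's own fixtures, and the nvmet-side configfs destinations are not shown in the log):

  # one connect_authenticate iteration, reconstructed from the trace above;
  # rpc_cmd is the test's RPC helper (a wrapper over scripts/rpc.py)
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
  # the connect only succeeds if the kernel target holds the matching DHHC-1 secret,
  # which nvmet_auth_set_key installed just before (the 'hmac(sha512)', 'ffdhe8192'
  # and DHHC-1 writes above; their configfs destinations are not visible in the trace)
  [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0   # tear down before the next keyid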
00:25:35.963 12:22:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.963 12:22:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.963 12:22:29 -- common/autotest_common.sh@10 -- # set +x 00:25:35.963 12:22:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.963 12:22:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:35.963 12:22:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:35.963 12:22:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:35.963 12:22:29 -- host/auth.sh@44 -- # digest=sha512 00:25:35.963 12:22:29 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.963 12:22:29 -- host/auth.sh@44 -- # keyid=3 00:25:35.963 12:22:29 -- host/auth.sh@45 -- # key=DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:35.963 12:22:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:35.963 12:22:29 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:35.963 12:22:29 -- host/auth.sh@49 -- # echo DHHC-1:02:YTVkNmEwYTY4NThjYzE0N2NiZDc1YmEzOGY4YTM5YTkyZDY2NzA2MTQ4N2UyZDgxRXay3A==: 00:25:35.963 12:22:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:25:35.963 12:22:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:35.963 12:22:29 -- host/auth.sh@68 -- # digest=sha512 00:25:35.963 12:22:29 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:35.963 12:22:29 -- host/auth.sh@68 -- # keyid=3 00:25:35.963 12:22:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:35.963 12:22:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.963 12:22:29 -- common/autotest_common.sh@10 -- # set +x 00:25:35.963 12:22:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.963 12:22:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:35.963 12:22:29 -- nvmf/common.sh@717 -- # local ip 00:25:35.963 12:22:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:35.963 12:22:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:35.963 12:22:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.963 12:22:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.963 12:22:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:35.963 12:22:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.963 12:22:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:35.963 12:22:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:35.963 12:22:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:35.963 12:22:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:35.963 12:22:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.963 12:22:29 -- common/autotest_common.sh@10 -- # set +x 00:25:36.532 nvme0n1 00:25:36.532 12:22:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.532 12:22:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:36.532 12:22:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.532 12:22:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.532 12:22:29 -- common/autotest_common.sh@10 -- # set +x 00:25:36.532 12:22:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.532 12:22:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.532 12:22:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.532 12:22:29 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.532 12:22:29 -- common/autotest_common.sh@10 -- # set +x 00:25:36.532 12:22:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.532 12:22:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:36.532 12:22:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:36.532 12:22:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:36.532 12:22:29 -- host/auth.sh@44 -- # digest=sha512 00:25:36.532 12:22:29 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.532 12:22:29 -- host/auth.sh@44 -- # keyid=4 00:25:36.532 12:22:29 -- host/auth.sh@45 -- # key=DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:36.532 12:22:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:36.532 12:22:29 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:36.532 12:22:29 -- host/auth.sh@49 -- # echo DHHC-1:03:ZGY2ZWQ1OTU0NWIwMTUxODI1MmM5NzAyNTQwZjgxZGEzYTQwNzBiZmRhYTc2NTMzNTg5MWJlYjMyMDI4MDU2OJTarf8=: 00:25:36.532 12:22:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:25:36.532 12:22:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:36.532 12:22:29 -- host/auth.sh@68 -- # digest=sha512 00:25:36.532 12:22:29 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:36.532 12:22:29 -- host/auth.sh@68 -- # keyid=4 00:25:36.532 12:22:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:36.532 12:22:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.532 12:22:29 -- common/autotest_common.sh@10 -- # set +x 00:25:36.790 12:22:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.790 12:22:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:36.790 12:22:30 -- nvmf/common.sh@717 -- # local ip 00:25:36.790 12:22:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:36.790 12:22:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:36.790 12:22:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.790 12:22:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.790 12:22:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:36.790 12:22:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.790 12:22:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:36.790 12:22:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:36.790 12:22:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:36.790 12:22:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:36.790 12:22:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.790 12:22:30 -- common/autotest_common.sh@10 -- # set +x 00:25:37.358 nvme0n1 00:25:37.358 12:22:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.358 12:22:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.358 12:22:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:37.358 12:22:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.358 12:22:30 -- common/autotest_common.sh@10 -- # set +x 00:25:37.358 12:22:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.358 12:22:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.358 12:22:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.358 12:22:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.358 12:22:30 -- 
common/autotest_common.sh@10 -- # set +x 00:25:37.358 12:22:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.358 12:22:30 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:37.358 12:22:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:37.358 12:22:30 -- host/auth.sh@44 -- # digest=sha256 00:25:37.358 12:22:30 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:37.358 12:22:30 -- host/auth.sh@44 -- # keyid=1 00:25:37.358 12:22:30 -- host/auth.sh@45 -- # key=DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:37.358 12:22:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:37.358 12:22:30 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:37.358 12:22:30 -- host/auth.sh@49 -- # echo DHHC-1:00:N2I0MmRlYjczODJlMmU1ZTFkZGVmOTllNWMxMmEyOTUwZWM2MjNiM2E5NGYzMmEwwS3rAQ==: 00:25:37.358 12:22:30 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:37.358 12:22:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.358 12:22:30 -- common/autotest_common.sh@10 -- # set +x 00:25:37.358 12:22:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.358 12:22:30 -- host/auth.sh@119 -- # get_main_ns_ip 00:25:37.358 12:22:30 -- nvmf/common.sh@717 -- # local ip 00:25:37.358 12:22:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:37.358 12:22:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:37.358 12:22:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.358 12:22:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.358 12:22:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:37.358 12:22:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.358 12:22:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:37.358 12:22:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:37.358 12:22:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:37.358 12:22:30 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:37.358 12:22:30 -- common/autotest_common.sh@638 -- # local es=0 00:25:37.358 12:22:30 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:37.358 12:22:30 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:37.358 12:22:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:37.358 12:22:30 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:37.358 12:22:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:37.358 12:22:30 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:37.358 12:22:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.358 12:22:30 -- common/autotest_common.sh@10 -- # set +x 00:25:37.358 request: 00:25:37.358 { 00:25:37.358 "name": "nvme0", 00:25:37.358 "trtype": "tcp", 00:25:37.358 "traddr": "10.0.0.1", 00:25:37.358 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:37.358 "adrfam": "ipv4", 00:25:37.358 "trsvcid": "4420", 00:25:37.358 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:37.358 "method": "bdev_nvme_attach_controller", 00:25:37.358 "req_id": 1 00:25:37.358 } 00:25:37.358 Got JSON-RPC error response 
00:25:37.358 response: 00:25:37.358 { 00:25:37.358 "code": -32602, 00:25:37.358 "message": "Invalid parameters" 00:25:37.358 } 00:25:37.358 12:22:30 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:37.358 12:22:30 -- common/autotest_common.sh@641 -- # es=1 00:25:37.358 12:22:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:37.358 12:22:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:37.358 12:22:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:37.358 12:22:30 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.358 12:22:30 -- host/auth.sh@121 -- # jq length 00:25:37.358 12:22:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.358 12:22:30 -- common/autotest_common.sh@10 -- # set +x 00:25:37.358 12:22:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.358 12:22:30 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:25:37.358 12:22:30 -- host/auth.sh@124 -- # get_main_ns_ip 00:25:37.358 12:22:30 -- nvmf/common.sh@717 -- # local ip 00:25:37.358 12:22:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:37.358 12:22:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:37.358 12:22:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.358 12:22:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.358 12:22:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:37.358 12:22:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.358 12:22:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:37.358 12:22:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:37.358 12:22:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:37.358 12:22:30 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:37.358 12:22:30 -- common/autotest_common.sh@638 -- # local es=0 00:25:37.359 12:22:30 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:37.359 12:22:30 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:37.359 12:22:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:37.359 12:22:30 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:37.359 12:22:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:37.359 12:22:30 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:37.359 12:22:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.359 12:22:30 -- common/autotest_common.sh@10 -- # set +x 00:25:37.359 request: 00:25:37.359 { 00:25:37.359 "name": "nvme0", 00:25:37.359 "trtype": "tcp", 00:25:37.359 "traddr": "10.0.0.1", 00:25:37.359 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:37.359 "adrfam": "ipv4", 00:25:37.359 "trsvcid": "4420", 00:25:37.359 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:37.359 "dhchap_key": "key2", 00:25:37.359 "method": "bdev_nvme_attach_controller", 00:25:37.359 "req_id": 1 00:25:37.359 } 00:25:37.359 Got JSON-RPC error response 00:25:37.359 response: 00:25:37.359 { 00:25:37.359 "code": -32602, 00:25:37.359 "message": "Invalid parameters" 00:25:37.359 } 00:25:37.359 12:22:30 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 
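The two failures above are deliberate: with the target re-keyed for sha256/ffdhe2048 key1, an attach without a key and an attach with the wrong key (key2) must both be rejected. Roughly, as the trace shows (NOT is the autotest helper that succeeds only when the wrapped command returns non-zero):

  # expected-failure path from the trace above
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0                    # no key at all
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2  # wrong key
  # each rejected attach surfaces as a JSON-RPC error (code -32602, "Invalid parameters"),
  # and bdev_nvme_get_controllers | jq length must still report 0 controllers afterwards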
00:25:37.359 12:22:30 -- common/autotest_common.sh@641 -- # es=1 00:25:37.359 12:22:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:37.359 12:22:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:37.359 12:22:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:37.359 12:22:30 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.359 12:22:30 -- host/auth.sh@127 -- # jq length 00:25:37.359 12:22:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.359 12:22:30 -- common/autotest_common.sh@10 -- # set +x 00:25:37.359 12:22:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.618 12:22:30 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:25:37.618 12:22:30 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:37.618 12:22:30 -- host/auth.sh@130 -- # cleanup 00:25:37.618 12:22:30 -- host/auth.sh@24 -- # nvmftestfini 00:25:37.618 12:22:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:37.618 12:22:30 -- nvmf/common.sh@117 -- # sync 00:25:37.618 12:22:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:37.618 12:22:30 -- nvmf/common.sh@120 -- # set +e 00:25:37.618 12:22:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:37.618 12:22:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:37.618 rmmod nvme_tcp 00:25:37.618 rmmod nvme_fabrics 00:25:37.618 12:22:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:37.618 12:22:30 -- nvmf/common.sh@124 -- # set -e 00:25:37.618 12:22:30 -- nvmf/common.sh@125 -- # return 0 00:25:37.618 12:22:30 -- nvmf/common.sh@478 -- # '[' -n 74715 ']' 00:25:37.618 12:22:30 -- nvmf/common.sh@479 -- # killprocess 74715 00:25:37.618 12:22:30 -- common/autotest_common.sh@936 -- # '[' -z 74715 ']' 00:25:37.618 12:22:30 -- common/autotest_common.sh@940 -- # kill -0 74715 00:25:37.618 12:22:30 -- common/autotest_common.sh@941 -- # uname 00:25:37.618 12:22:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:37.618 12:22:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74715 00:25:37.618 killing process with pid 74715 00:25:37.618 12:22:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:37.618 12:22:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:37.618 12:22:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74715' 00:25:37.618 12:22:30 -- common/autotest_common.sh@955 -- # kill 74715 00:25:37.618 12:22:30 -- common/autotest_common.sh@960 -- # wait 74715 00:25:37.877 12:22:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:37.877 12:22:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:37.877 12:22:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:37.877 12:22:31 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:37.877 12:22:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:37.877 12:22:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.877 12:22:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:37.877 12:22:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.877 12:22:31 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:37.877 12:22:31 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:37.877 12:22:31 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:37.877 12:22:31 -- host/auth.sh@27 -- # clean_kernel_target 00:25:37.877 12:22:31 -- 
nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:37.877 12:22:31 -- nvmf/common.sh@675 -- # echo 0 00:25:37.877 12:22:31 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:37.877 12:22:31 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:37.877 12:22:31 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:37.877 12:22:31 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:37.877 12:22:31 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:25:37.877 12:22:31 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:25:37.877 12:22:31 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:38.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:38.812 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:38.812 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:38.812 12:22:32 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.4rx /tmp/spdk.key-null.Izw /tmp/spdk.key-sha256.FpP /tmp/spdk.key-sha384.b4S /tmp/spdk.key-sha512.pax /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:25:38.812 12:22:32 -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:39.071 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:39.329 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:39.329 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:39.329 00:25:39.329 real 0m38.731s 00:25:39.329 user 0m34.611s 00:25:39.329 sys 0m3.664s 00:25:39.329 12:22:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:39.329 ************************************ 00:25:39.329 END TEST nvmf_auth 00:25:39.329 ************************************ 00:25:39.329 12:22:32 -- common/autotest_common.sh@10 -- # set +x 00:25:39.329 12:22:32 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:25:39.329 12:22:32 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:39.329 12:22:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:39.329 12:22:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:39.329 12:22:32 -- common/autotest_common.sh@10 -- # set +x 00:25:39.329 ************************************ 00:25:39.329 START TEST nvmf_digest 00:25:39.329 ************************************ 00:25:39.329 12:22:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:39.588 * Looking for test storage... 
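The cleanup traced above removes the kernel nvmet target in reverse order of its creation; condensed from the commands in the log (the destination of the bare "echo 0" is not recorded, so it is only assumed to be the namespace's enable attribute):

  # auth.sh cleanup plus clean_kernel_target, as traced above
  rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable  # path assumed
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe -r nvmet_tcp nvmet   # only once the configfs tree is empty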
00:25:39.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:39.588 12:22:32 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:39.588 12:22:32 -- nvmf/common.sh@7 -- # uname -s 00:25:39.588 12:22:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.588 12:22:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.588 12:22:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.588 12:22:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.588 12:22:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.588 12:22:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.588 12:22:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.588 12:22:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.588 12:22:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.588 12:22:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.588 12:22:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:25:39.588 12:22:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:25:39.588 12:22:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.588 12:22:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.588 12:22:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:39.588 12:22:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.588 12:22:32 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:39.588 12:22:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.588 12:22:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.588 12:22:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.588 12:22:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.588 12:22:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.588 12:22:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.588 12:22:32 -- paths/export.sh@5 -- # export PATH 00:25:39.588 12:22:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.588 12:22:32 -- nvmf/common.sh@47 -- # : 0 00:25:39.588 12:22:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:39.588 12:22:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:39.588 12:22:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.588 12:22:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.588 12:22:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.588 12:22:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:39.588 12:22:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:39.588 12:22:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:39.588 12:22:32 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:39.588 12:22:32 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:39.588 12:22:32 -- host/digest.sh@16 -- # runtime=2 00:25:39.588 12:22:32 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:39.588 12:22:32 -- host/digest.sh@138 -- # nvmftestinit 00:25:39.588 12:22:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:39.588 12:22:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.588 12:22:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:39.588 12:22:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:39.588 12:22:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:39.588 12:22:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.588 12:22:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:39.588 12:22:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.588 12:22:32 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:39.588 12:22:32 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:39.588 12:22:32 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:39.588 12:22:32 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:39.588 12:22:32 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:39.588 12:22:32 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:39.588 12:22:32 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.588 12:22:32 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.588 12:22:32 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:39.588 12:22:32 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:39.588 12:22:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:25:39.588 12:22:32 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:39.588 12:22:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:39.588 12:22:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.588 12:22:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:39.588 12:22:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:39.588 12:22:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:39.588 12:22:32 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:39.588 12:22:32 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:39.588 12:22:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:39.588 Cannot find device "nvmf_tgt_br" 00:25:39.588 12:22:32 -- nvmf/common.sh@155 -- # true 00:25:39.588 12:22:32 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:39.588 Cannot find device "nvmf_tgt_br2" 00:25:39.588 12:22:32 -- nvmf/common.sh@156 -- # true 00:25:39.588 12:22:32 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:39.588 12:22:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:39.588 Cannot find device "nvmf_tgt_br" 00:25:39.588 12:22:32 -- nvmf/common.sh@158 -- # true 00:25:39.588 12:22:32 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:39.588 Cannot find device "nvmf_tgt_br2" 00:25:39.588 12:22:32 -- nvmf/common.sh@159 -- # true 00:25:39.588 12:22:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:39.588 12:22:32 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:39.588 12:22:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:39.588 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:39.588 12:22:32 -- nvmf/common.sh@162 -- # true 00:25:39.588 12:22:32 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:39.588 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:39.588 12:22:32 -- nvmf/common.sh@163 -- # true 00:25:39.588 12:22:32 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:39.588 12:22:32 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:39.588 12:22:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:39.588 12:22:32 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:39.588 12:22:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:39.588 12:22:33 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:39.588 12:22:33 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:39.588 12:22:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:39.588 12:22:33 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:39.588 12:22:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:39.883 12:22:33 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:39.883 12:22:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:39.883 12:22:33 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:39.883 12:22:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:39.883 12:22:33 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:39.883 12:22:33 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:39.883 12:22:33 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:39.883 12:22:33 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:39.883 12:22:33 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:39.883 12:22:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:39.883 12:22:33 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:39.883 12:22:33 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:39.883 12:22:33 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:39.883 12:22:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:39.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:25:39.883 00:25:39.883 --- 10.0.0.2 ping statistics --- 00:25:39.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.884 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:25:39.884 12:22:33 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:39.884 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:39.884 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:25:39.884 00:25:39.884 --- 10.0.0.3 ping statistics --- 00:25:39.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.884 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:25:39.884 12:22:33 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:39.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:39.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:25:39.884 00:25:39.884 --- 10.0.0.1 ping statistics --- 00:25:39.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.884 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:25:39.884 12:22:33 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.884 12:22:33 -- nvmf/common.sh@422 -- # return 0 00:25:39.884 12:22:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:39.884 12:22:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.884 12:22:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:39.884 12:22:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:39.884 12:22:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.884 12:22:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:39.884 12:22:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:39.884 12:22:33 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:39.884 12:22:33 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:39.884 12:22:33 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:39.884 12:22:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:39.884 12:22:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:39.884 12:22:33 -- common/autotest_common.sh@10 -- # set +x 00:25:39.884 ************************************ 00:25:39.884 START TEST nvmf_digest_clean 00:25:39.884 ************************************ 00:25:39.884 12:22:33 -- common/autotest_common.sh@1111 -- # run_digest 00:25:39.884 12:22:33 -- host/digest.sh@120 -- # local dsa_initiator 00:25:39.884 12:22:33 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:39.884 12:22:33 -- host/digest.sh@121 -- # dsa_initiator=false 
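nvmf_veth_init, traced above, builds the virtual network the digest tests run over; the sketch below condenses the ip/iptables calls from the log (a few intermediate "link set ... up" steps are omitted):

  # topology built by nvmf_veth_init, names and addresses as traced above
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1/24 on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, 10.0.0.2/24 inside the netns
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target address, 10.0.0.3/24
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # connectivity check: ping 10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the netns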
00:25:39.884 12:22:33 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:39.884 12:22:33 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:39.884 12:22:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:39.884 12:22:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:39.884 12:22:33 -- common/autotest_common.sh@10 -- # set +x 00:25:39.884 12:22:33 -- nvmf/common.sh@470 -- # nvmfpid=76321 00:25:39.884 12:22:33 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:39.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.884 12:22:33 -- nvmf/common.sh@471 -- # waitforlisten 76321 00:25:39.884 12:22:33 -- common/autotest_common.sh@817 -- # '[' -z 76321 ']' 00:25:39.884 12:22:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.884 12:22:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:39.884 12:22:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.884 12:22:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:39.884 12:22:33 -- common/autotest_common.sh@10 -- # set +x 00:25:39.884 [2024-04-26 12:22:33.320052] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:25:39.884 [2024-04-26 12:22:33.320425] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.163 [2024-04-26 12:22:33.460946] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.163 [2024-04-26 12:22:33.590407] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.163 [2024-04-26 12:22:33.590705] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.163 [2024-04-26 12:22:33.590877] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.163 [2024-04-26 12:22:33.591019] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.163 [2024-04-26 12:22:33.591070] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
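The target itself is launched inside that namespace with --wait-for-rpc, so the test can finish configuration over the RPC socket before any I/O starts; condensed from the trace (waitforlisten is the autotest helper that polls the RPC socket until the new process answers):

  # how nvmfappstart brings the digest target up, as traced above
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  waitforlisten "$nvmfpid"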
00:25:40.163 [2024-04-26 12:22:33.591219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.098 12:22:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:41.098 12:22:34 -- common/autotest_common.sh@850 -- # return 0 00:25:41.098 12:22:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:41.098 12:22:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:41.098 12:22:34 -- common/autotest_common.sh@10 -- # set +x 00:25:41.098 12:22:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.098 12:22:34 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:41.098 12:22:34 -- host/digest.sh@126 -- # common_target_config 00:25:41.098 12:22:34 -- host/digest.sh@43 -- # rpc_cmd 00:25:41.098 12:22:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.098 12:22:34 -- common/autotest_common.sh@10 -- # set +x 00:25:41.098 null0 00:25:41.098 [2024-04-26 12:22:34.487217] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.098 [2024-04-26 12:22:34.511349] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.098 12:22:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.098 12:22:34 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:41.098 12:22:34 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:41.098 12:22:34 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:41.098 12:22:34 -- host/digest.sh@80 -- # rw=randread 00:25:41.098 12:22:34 -- host/digest.sh@80 -- # bs=4096 00:25:41.098 12:22:34 -- host/digest.sh@80 -- # qd=128 00:25:41.098 12:22:34 -- host/digest.sh@80 -- # scan_dsa=false 00:25:41.098 12:22:34 -- host/digest.sh@83 -- # bperfpid=76353 00:25:41.098 12:22:34 -- host/digest.sh@84 -- # waitforlisten 76353 /var/tmp/bperf.sock 00:25:41.098 12:22:34 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:41.098 12:22:34 -- common/autotest_common.sh@817 -- # '[' -z 76353 ']' 00:25:41.098 12:22:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:41.098 12:22:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:41.098 12:22:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:41.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:41.098 12:22:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:41.098 12:22:34 -- common/autotest_common.sh@10 -- # set +x 00:25:41.098 [2024-04-26 12:22:34.565821] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:25:41.098 [2024-04-26 12:22:34.566107] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76353 ] 00:25:41.356 [2024-04-26 12:22:34.701443] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.356 [2024-04-26 12:22:34.820038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.290 12:22:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:42.290 12:22:35 -- common/autotest_common.sh@850 -- # return 0 00:25:42.290 12:22:35 -- host/digest.sh@86 -- # false 00:25:42.290 12:22:35 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:42.290 12:22:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:42.548 12:22:35 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:42.548 12:22:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:43.114 nvme0n1 00:25:43.114 12:22:36 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:43.114 12:22:36 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:43.115 Running I/O for 2 seconds... 00:25:45.015 00:25:45.015 Latency(us) 00:25:45.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.015 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:45.015 nvme0n1 : 2.00 14823.55 57.90 0.00 0.00 8629.15 8221.79 18230.92 00:25:45.015 =================================================================================================================== 00:25:45.015 Total : 14823.55 57.90 0.00 0.00 8629.15 8221.79 18230.92 00:25:45.015 0 00:25:45.015 12:22:38 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:45.015 12:22:38 -- host/digest.sh@93 -- # get_accel_stats 00:25:45.015 12:22:38 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:45.015 12:22:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:45.015 12:22:38 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:45.015 | select(.opcode=="crc32c") 00:25:45.015 | "\(.module_name) \(.executed)"' 00:25:45.583 12:22:38 -- host/digest.sh@94 -- # false 00:25:45.583 12:22:38 -- host/digest.sh@94 -- # exp_module=software 00:25:45.583 12:22:38 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:45.583 12:22:38 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:45.583 12:22:38 -- host/digest.sh@98 -- # killprocess 76353 00:25:45.583 12:22:38 -- common/autotest_common.sh@936 -- # '[' -z 76353 ']' 00:25:45.583 12:22:38 -- common/autotest_common.sh@940 -- # kill -0 76353 00:25:45.583 12:22:38 -- common/autotest_common.sh@941 -- # uname 00:25:45.583 12:22:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:45.583 12:22:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76353 00:25:45.583 killing process with pid 76353 00:25:45.583 Received shutdown signal, test time was about 2.000000 seconds 00:25:45.583 00:25:45.583 Latency(us) 00:25:45.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:25:45.583 =================================================================================================================== 00:25:45.583 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:45.583 12:22:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:45.583 12:22:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:45.583 12:22:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76353' 00:25:45.583 12:22:38 -- common/autotest_common.sh@955 -- # kill 76353 00:25:45.583 12:22:38 -- common/autotest_common.sh@960 -- # wait 76353 00:25:45.843 12:22:39 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:45.843 12:22:39 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:45.843 12:22:39 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:45.843 12:22:39 -- host/digest.sh@80 -- # rw=randread 00:25:45.843 12:22:39 -- host/digest.sh@80 -- # bs=131072 00:25:45.843 12:22:39 -- host/digest.sh@80 -- # qd=16 00:25:45.843 12:22:39 -- host/digest.sh@80 -- # scan_dsa=false 00:25:45.843 12:22:39 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:45.843 12:22:39 -- host/digest.sh@83 -- # bperfpid=76419 00:25:45.843 12:22:39 -- host/digest.sh@84 -- # waitforlisten 76419 /var/tmp/bperf.sock 00:25:45.843 12:22:39 -- common/autotest_common.sh@817 -- # '[' -z 76419 ']' 00:25:45.843 12:22:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:45.843 12:22:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:45.843 12:22:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:45.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:45.843 12:22:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:45.843 12:22:39 -- common/autotest_common.sh@10 -- # set +x 00:25:45.843 [2024-04-26 12:22:39.110966] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:25:45.843 [2024-04-26 12:22:39.111328] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76419 ] 00:25:45.843 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:45.843 Zero copy mechanism will not be used. 
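Every run_bperf invocation in this test follows the same shape, differing only in workload, block size and queue depth; below is a condensed sketch built from the commands visible in the trace (paths shortened to the spdk repo root, killprocess is the autotest kill-and-wait helper):

  # shape of one run_bperf round, e.g. randread / 131072 / qd 16, as traced above
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  bperfpid=$!
  waitforlisten "$bperfpid" /var/tmp/bperf.sock
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests   # the 2-second run
  # verify the TCP data digest actually exercised crc32c, and in which accel module
  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # the test expects executed > 0 and module_name == software (scan_dsa=false in these runs)
  killprocess "$bperfpid"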
00:25:45.843 [2024-04-26 12:22:39.251403] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.101 [2024-04-26 12:22:39.372102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.674 12:22:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:46.674 12:22:40 -- common/autotest_common.sh@850 -- # return 0 00:25:46.674 12:22:40 -- host/digest.sh@86 -- # false 00:25:46.674 12:22:40 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:46.674 12:22:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:46.944 12:22:40 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:46.944 12:22:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:47.511 nvme0n1 00:25:47.511 12:22:40 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:47.511 12:22:40 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:47.511 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:47.511 Zero copy mechanism will not be used. 00:25:47.511 Running I/O for 2 seconds... 00:25:49.412 00:25:49.412 Latency(us) 00:25:49.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.412 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:49.412 nvme0n1 : 2.00 7509.35 938.67 0.00 0.00 2127.48 1891.61 5093.93 00:25:49.412 =================================================================================================================== 00:25:49.412 Total : 7509.35 938.67 0.00 0.00 2127.48 1891.61 5093.93 00:25:49.412 0 00:25:49.412 12:22:42 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:49.412 12:22:42 -- host/digest.sh@93 -- # get_accel_stats 00:25:49.412 12:22:42 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:49.412 12:22:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:49.412 12:22:42 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:49.412 | select(.opcode=="crc32c") 00:25:49.412 | "\(.module_name) \(.executed)"' 00:25:49.669 12:22:43 -- host/digest.sh@94 -- # false 00:25:49.669 12:22:43 -- host/digest.sh@94 -- # exp_module=software 00:25:49.669 12:22:43 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:49.669 12:22:43 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:49.669 12:22:43 -- host/digest.sh@98 -- # killprocess 76419 00:25:49.669 12:22:43 -- common/autotest_common.sh@936 -- # '[' -z 76419 ']' 00:25:49.669 12:22:43 -- common/autotest_common.sh@940 -- # kill -0 76419 00:25:49.669 12:22:43 -- common/autotest_common.sh@941 -- # uname 00:25:49.669 12:22:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:49.669 12:22:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76419 00:25:49.669 killing process with pid 76419 00:25:49.669 Received shutdown signal, test time was about 2.000000 seconds 00:25:49.669 00:25:49.669 Latency(us) 00:25:49.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.669 =================================================================================================================== 00:25:49.670 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:49.670 12:22:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:49.670 12:22:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:49.670 12:22:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76419' 00:25:49.670 12:22:43 -- common/autotest_common.sh@955 -- # kill 76419 00:25:49.670 12:22:43 -- common/autotest_common.sh@960 -- # wait 76419 00:25:49.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:49.928 12:22:43 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:49.928 12:22:43 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:49.928 12:22:43 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:49.928 12:22:43 -- host/digest.sh@80 -- # rw=randwrite 00:25:49.928 12:22:43 -- host/digest.sh@80 -- # bs=4096 00:25:49.928 12:22:43 -- host/digest.sh@80 -- # qd=128 00:25:49.928 12:22:43 -- host/digest.sh@80 -- # scan_dsa=false 00:25:49.928 12:22:43 -- host/digest.sh@83 -- # bperfpid=76479 00:25:49.928 12:22:43 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:49.928 12:22:43 -- host/digest.sh@84 -- # waitforlisten 76479 /var/tmp/bperf.sock 00:25:49.928 12:22:43 -- common/autotest_common.sh@817 -- # '[' -z 76479 ']' 00:25:49.928 12:22:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:49.928 12:22:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:49.928 12:22:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:49.928 12:22:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:49.928 12:22:43 -- common/autotest_common.sh@10 -- # set +x 00:25:50.265 [2024-04-26 12:22:43.422274] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:25:50.265 [2024-04-26 12:22:43.422503] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76479 ] 00:25:50.265 [2024-04-26 12:22:43.556429] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.265 [2024-04-26 12:22:43.656266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.218 12:22:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:51.218 12:22:44 -- common/autotest_common.sh@850 -- # return 0 00:25:51.218 12:22:44 -- host/digest.sh@86 -- # false 00:25:51.218 12:22:44 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:51.218 12:22:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:51.505 12:22:44 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:51.505 12:22:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:51.763 nvme0n1 00:25:51.763 12:22:45 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:51.763 12:22:45 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:51.763 Running I/O for 2 seconds... 00:25:54.290 00:25:54.290 Latency(us) 00:25:54.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.290 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:54.290 nvme0n1 : 2.01 15821.52 61.80 0.00 0.00 8083.79 2323.55 15252.01 00:25:54.290 =================================================================================================================== 00:25:54.290 Total : 15821.52 61.80 0.00 0.00 8083.79 2323.55 15252.01 00:25:54.290 0 00:25:54.290 12:22:47 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:54.290 12:22:47 -- host/digest.sh@93 -- # get_accel_stats 00:25:54.290 12:22:47 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:54.290 12:22:47 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:54.290 | select(.opcode=="crc32c") 00:25:54.290 | "\(.module_name) \(.executed)"' 00:25:54.290 12:22:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:54.290 12:22:47 -- host/digest.sh@94 -- # false 00:25:54.290 12:22:47 -- host/digest.sh@94 -- # exp_module=software 00:25:54.290 12:22:47 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:54.290 12:22:47 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:54.290 12:22:47 -- host/digest.sh@98 -- # killprocess 76479 00:25:54.290 12:22:47 -- common/autotest_common.sh@936 -- # '[' -z 76479 ']' 00:25:54.290 12:22:47 -- common/autotest_common.sh@940 -- # kill -0 76479 00:25:54.290 12:22:47 -- common/autotest_common.sh@941 -- # uname 00:25:54.290 12:22:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:54.290 12:22:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76479 00:25:54.290 killing process with pid 76479 00:25:54.290 Received shutdown signal, test time was about 2.000000 seconds 00:25:54.290 00:25:54.290 Latency(us) 00:25:54.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:25:54.290 =================================================================================================================== 00:25:54.290 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:54.290 12:22:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:54.290 12:22:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:54.290 12:22:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76479' 00:25:54.290 12:22:47 -- common/autotest_common.sh@955 -- # kill 76479 00:25:54.290 12:22:47 -- common/autotest_common.sh@960 -- # wait 76479 00:25:54.548 12:22:47 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:54.548 12:22:47 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:54.548 12:22:47 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:54.548 12:22:47 -- host/digest.sh@80 -- # rw=randwrite 00:25:54.548 12:22:47 -- host/digest.sh@80 -- # bs=131072 00:25:54.548 12:22:47 -- host/digest.sh@80 -- # qd=16 00:25:54.548 12:22:47 -- host/digest.sh@80 -- # scan_dsa=false 00:25:54.548 12:22:47 -- host/digest.sh@83 -- # bperfpid=76534 00:25:54.548 12:22:47 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:54.548 12:22:47 -- host/digest.sh@84 -- # waitforlisten 76534 /var/tmp/bperf.sock 00:25:54.548 12:22:47 -- common/autotest_common.sh@817 -- # '[' -z 76534 ']' 00:25:54.548 12:22:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:54.548 12:22:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:54.548 12:22:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:54.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:54.548 12:22:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:54.548 12:22:47 -- common/autotest_common.sh@10 -- # set +x 00:25:54.548 [2024-04-26 12:22:47.818037] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:25:54.548 [2024-04-26 12:22:47.818413] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76534 ] 00:25:54.548 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:54.548 Zero copy mechanism will not be used. 
00:25:54.549 [2024-04-26 12:22:47.957571] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.806 [2024-04-26 12:22:48.070660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.371 12:22:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:55.371 12:22:48 -- common/autotest_common.sh@850 -- # return 0 00:25:55.371 12:22:48 -- host/digest.sh@86 -- # false 00:25:55.371 12:22:48 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:55.371 12:22:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:55.935 12:22:49 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:55.935 12:22:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:56.192 nvme0n1 00:25:56.192 12:22:49 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:56.192 12:22:49 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:56.192 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:56.192 Zero copy mechanism will not be used. 00:25:56.192 Running I/O for 2 seconds... 00:25:58.722 00:25:58.722 Latency(us) 00:25:58.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.722 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:58.722 nvme0n1 : 2.00 5886.64 735.83 0.00 0.00 2712.34 2025.66 9651.67 00:25:58.722 =================================================================================================================== 00:25:58.722 Total : 5886.64 735.83 0.00 0.00 2712.34 2025.66 9651.67 00:25:58.722 0 00:25:58.722 12:22:51 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:58.722 12:22:51 -- host/digest.sh@93 -- # get_accel_stats 00:25:58.722 12:22:51 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:58.722 12:22:51 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:58.722 | select(.opcode=="crc32c") 00:25:58.722 | "\(.module_name) \(.executed)"' 00:25:58.722 12:22:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:58.722 12:22:51 -- host/digest.sh@94 -- # false 00:25:58.722 12:22:51 -- host/digest.sh@94 -- # exp_module=software 00:25:58.722 12:22:51 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:58.722 12:22:51 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:58.722 12:22:51 -- host/digest.sh@98 -- # killprocess 76534 00:25:58.722 12:22:51 -- common/autotest_common.sh@936 -- # '[' -z 76534 ']' 00:25:58.723 12:22:51 -- common/autotest_common.sh@940 -- # kill -0 76534 00:25:58.723 12:22:51 -- common/autotest_common.sh@941 -- # uname 00:25:58.723 12:22:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:58.723 12:22:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76534 00:25:58.723 killing process with pid 76534 00:25:58.723 Received shutdown signal, test time was about 2.000000 seconds 00:25:58.723 00:25:58.723 Latency(us) 00:25:58.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.723 =================================================================================================================== 00:25:58.723 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:58.723 12:22:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:58.723 12:22:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:58.723 12:22:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76534' 00:25:58.723 12:22:51 -- common/autotest_common.sh@955 -- # kill 76534 00:25:58.723 12:22:51 -- common/autotest_common.sh@960 -- # wait 76534 00:25:58.980 12:22:52 -- host/digest.sh@132 -- # killprocess 76321 00:25:58.981 12:22:52 -- common/autotest_common.sh@936 -- # '[' -z 76321 ']' 00:25:58.981 12:22:52 -- common/autotest_common.sh@940 -- # kill -0 76321 00:25:58.981 12:22:52 -- common/autotest_common.sh@941 -- # uname 00:25:58.981 12:22:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:58.981 12:22:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76321 00:25:58.981 killing process with pid 76321 00:25:58.981 12:22:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:58.981 12:22:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:58.981 12:22:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76321' 00:25:58.981 12:22:52 -- common/autotest_common.sh@955 -- # kill 76321 00:25:58.981 12:22:52 -- common/autotest_common.sh@960 -- # wait 76321 00:25:59.239 ************************************ 00:25:59.239 END TEST nvmf_digest_clean 00:25:59.239 ************************************ 00:25:59.239 00:25:59.239 real 0m19.220s 00:25:59.239 user 0m37.616s 00:25:59.239 sys 0m4.650s 00:25:59.239 12:22:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:59.239 12:22:52 -- common/autotest_common.sh@10 -- # set +x 00:25:59.239 12:22:52 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:59.239 12:22:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:59.239 12:22:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:59.239 12:22:52 -- common/autotest_common.sh@10 -- # set +x 00:25:59.239 ************************************ 00:25:59.239 START TEST nvmf_digest_error 00:25:59.239 ************************************ 00:25:59.239 12:22:52 -- common/autotest_common.sh@1111 -- # run_digest_error 00:25:59.239 12:22:52 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:59.239 12:22:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:59.239 12:22:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:59.239 12:22:52 -- common/autotest_common.sh@10 -- # set +x 00:25:59.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.239 12:22:52 -- nvmf/common.sh@470 -- # nvmfpid=76631 00:25:59.239 12:22:52 -- nvmf/common.sh@471 -- # waitforlisten 76631 00:25:59.239 12:22:52 -- common/autotest_common.sh@817 -- # '[' -z 76631 ']' 00:25:59.239 12:22:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:59.239 12:22:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.239 12:22:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:59.239 12:22:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:59.239 12:22:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:59.239 12:22:52 -- common/autotest_common.sh@10 -- # set +x 00:25:59.239 [2024-04-26 12:22:52.656745] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:25:59.239 [2024-04-26 12:22:52.656851] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.496 [2024-04-26 12:22:52.793375] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.496 [2024-04-26 12:22:52.910840] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.496 [2024-04-26 12:22:52.910905] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.496 [2024-04-26 12:22:52.910923] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.496 [2024-04-26 12:22:52.910935] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.496 [2024-04-26 12:22:52.910947] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:59.496 [2024-04-26 12:22:52.910994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.428 12:22:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:00.428 12:22:53 -- common/autotest_common.sh@850 -- # return 0 00:26:00.428 12:22:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:00.428 12:22:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:00.428 12:22:53 -- common/autotest_common.sh@10 -- # set +x 00:26:00.428 12:22:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.428 12:22:53 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:00.428 12:22:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.428 12:22:53 -- common/autotest_common.sh@10 -- # set +x 00:26:00.428 [2024-04-26 12:22:53.639628] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:00.428 12:22:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.428 12:22:53 -- host/digest.sh@105 -- # common_target_config 00:26:00.428 12:22:53 -- host/digest.sh@43 -- # rpc_cmd 00:26:00.428 12:22:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.428 12:22:53 -- common/autotest_common.sh@10 -- # set +x 00:26:00.428 null0 00:26:00.428 [2024-04-26 12:22:53.754100] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.428 [2024-04-26 12:22:53.778305] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.428 12:22:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.428 12:22:53 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:00.428 12:22:53 -- host/digest.sh@54 -- # local rw bs qd 00:26:00.428 12:22:53 -- host/digest.sh@56 -- # rw=randread 00:26:00.428 12:22:53 -- host/digest.sh@56 -- # bs=4096 00:26:00.428 12:22:53 -- host/digest.sh@56 -- # qd=128 00:26:00.428 12:22:53 -- host/digest.sh@58 -- # bperfpid=76663 00:26:00.428 12:22:53 -- host/digest.sh@60 -- # waitforlisten 76663 /var/tmp/bperf.sock 00:26:00.428 12:22:53 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w 
randread -o 4096 -t 2 -q 128 -z 00:26:00.428 12:22:53 -- common/autotest_common.sh@817 -- # '[' -z 76663 ']' 00:26:00.428 12:22:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:00.428 12:22:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:00.428 12:22:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:00.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:00.428 12:22:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:00.428 12:22:53 -- common/autotest_common.sh@10 -- # set +x 00:26:00.428 [2024-04-26 12:22:53.827751] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:26:00.428 [2024-04-26 12:22:53.828008] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76663 ] 00:26:00.687 [2024-04-26 12:22:53.970038] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.687 [2024-04-26 12:22:54.098137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.633 12:22:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:01.633 12:22:54 -- common/autotest_common.sh@850 -- # return 0 00:26:01.633 12:22:54 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:01.633 12:22:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:01.921 12:22:55 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:01.921 12:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.921 12:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:01.921 12:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.921 12:22:55 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:01.921 12:22:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:02.201 nvme0n1 00:26:02.201 12:22:55 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:02.201 12:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.201 12:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:02.201 12:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.201 12:22:55 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:02.201 12:22:55 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:02.201 Running I/O for 2 seconds... 
00:26:02.201 [2024-04-26 12:22:55.661588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.201 [2024-04-26 12:22:55.661680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.201 [2024-04-26 12:22:55.661706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.460 [2024-04-26 12:22:55.679361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.460 [2024-04-26 12:22:55.679421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.460 [2024-04-26 12:22:55.679437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.460 [2024-04-26 12:22:55.696599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.460 [2024-04-26 12:22:55.696661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.460 [2024-04-26 12:22:55.696677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.460 [2024-04-26 12:22:55.713881] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.460 [2024-04-26 12:22:55.713943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.460 [2024-04-26 12:22:55.713958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.460 [2024-04-26 12:22:55.731141] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.460 [2024-04-26 12:22:55.731216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.460 [2024-04-26 12:22:55.731232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.460 [2024-04-26 12:22:55.748308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.460 [2024-04-26 12:22:55.748360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.460 [2024-04-26 12:22:55.748375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.460 [2024-04-26 12:22:55.765418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.460 [2024-04-26 12:22:55.765471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.461 [2024-04-26 12:22:55.765486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.461 [2024-04-26 12:22:55.782516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.461 [2024-04-26 12:22:55.782564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.461 [2024-04-26 12:22:55.782578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.461 [2024-04-26 12:22:55.800181] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.461 [2024-04-26 12:22:55.800246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.461 [2024-04-26 12:22:55.800261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.461 [2024-04-26 12:22:55.817341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.461 [2024-04-26 12:22:55.817392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.461 [2024-04-26 12:22:55.817407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.461 [2024-04-26 12:22:55.834463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.461 [2024-04-26 12:22:55.834508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.461 [2024-04-26 12:22:55.834523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.461 [2024-04-26 12:22:55.851612] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.461 [2024-04-26 12:22:55.851662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.461 [2024-04-26 12:22:55.851678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.461 [2024-04-26 12:22:55.868703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.461 [2024-04-26 12:22:55.868749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.461 [2024-04-26 12:22:55.868764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.461 [2024-04-26 12:22:55.885765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.461 [2024-04-26 12:22:55.885816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.461 [2024-04-26 12:22:55.885831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.461 [2024-04-26 12:22:55.902838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.461 [2024-04-26 12:22:55.902879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.461 [2024-04-26 12:22:55.902893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.461 [2024-04-26 12:22:55.919994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.461 [2024-04-26 12:22:55.920040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.461 [2024-04-26 12:22:55.920055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.719 [2024-04-26 12:22:55.937118] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.720 [2024-04-26 12:22:55.937164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.720 [2024-04-26 12:22:55.937192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.720 [2024-04-26 12:22:55.954159] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.720 [2024-04-26 12:22:55.954214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.720 [2024-04-26 12:22:55.954228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.720 [2024-04-26 12:22:55.971290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.720 [2024-04-26 12:22:55.971345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.720 [2024-04-26 12:22:55.971361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.720 [2024-04-26 12:22:55.988492] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.720 [2024-04-26 12:22:55.988550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.720 [2024-04-26 12:22:55.988565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.720 [2024-04-26 12:22:56.005643] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.720 [2024-04-26 12:22:56.005706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.720 [2024-04-26 12:22:56.005721] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.720 [2024-04-26 12:22:56.022806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.720 [2024-04-26 12:22:56.022858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.720 [2024-04-26 12:22:56.022873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.720 [2024-04-26 12:22:56.040197] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.720 [2024-04-26 12:22:56.040247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.720 [2024-04-26 12:22:56.040263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.720 [2024-04-26 12:22:56.057408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.720 [2024-04-26 12:22:56.057460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.720 [2024-04-26 12:22:56.057474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.720 [2024-04-26 12:22:56.074499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.720 [2024-04-26 12:22:56.074546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.720 [2024-04-26 12:22:56.074560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.720 [2024-04-26 12:22:56.091569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.720 [2024-04-26 12:22:56.091617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.720 [2024-04-26 12:22:56.091632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.720 [2024-04-26 12:22:56.108697] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.720 [2024-04-26 12:22:56.108750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.720 [2024-04-26 12:22:56.108765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.720 [2024-04-26 12:22:56.125862] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.720 [2024-04-26 12:22:56.125909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:02.720 [2024-04-26 12:22:56.125924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.720 [2024-04-26 12:22:56.143076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.720 [2024-04-26 12:22:56.143132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.720 [2024-04-26 12:22:56.143146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.720 [2024-04-26 12:22:56.160365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.720 [2024-04-26 12:22:56.160407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.720 [2024-04-26 12:22:56.160428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.720 [2024-04-26 12:22:56.177503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.720 [2024-04-26 12:22:56.177549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.720 [2024-04-26 12:22:56.177563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.979 [2024-04-26 12:22:56.194694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.979 [2024-04-26 12:22:56.194758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.979 [2024-04-26 12:22:56.194773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.979 [2024-04-26 12:22:56.212027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.979 [2024-04-26 12:22:56.212085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.979 [2024-04-26 12:22:56.212100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.979 [2024-04-26 12:22:56.229294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.979 [2024-04-26 12:22:56.229347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.979 [2024-04-26 12:22:56.229362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.979 [2024-04-26 12:22:56.246396] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.979 [2024-04-26 12:22:56.246438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7294 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.979 [2024-04-26 12:22:56.246452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.979 [2024-04-26 12:22:56.263498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.979 [2024-04-26 12:22:56.263543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.979 [2024-04-26 12:22:56.263561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.979 [2024-04-26 12:22:56.281534] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.979 [2024-04-26 12:22:56.281596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.979 [2024-04-26 12:22:56.281612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.979 [2024-04-26 12:22:56.299747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.979 [2024-04-26 12:22:56.299813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.979 [2024-04-26 12:22:56.299830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.979 [2024-04-26 12:22:56.317464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.979 [2024-04-26 12:22:56.317515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.979 [2024-04-26 12:22:56.317537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.979 [2024-04-26 12:22:56.334970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.979 [2024-04-26 12:22:56.335023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.979 [2024-04-26 12:22:56.335038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.979 [2024-04-26 12:22:56.352592] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.979 [2024-04-26 12:22:56.352642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.979 [2024-04-26 12:22:56.352658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.979 [2024-04-26 12:22:56.370053] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.979 [2024-04-26 12:22:56.370102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.979 [2024-04-26 12:22:56.370117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.979 [2024-04-26 12:22:56.387637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.979 [2024-04-26 12:22:56.387696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.979 [2024-04-26 12:22:56.387711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.979 [2024-04-26 12:22:56.406732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.979 [2024-04-26 12:22:56.406795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.979 [2024-04-26 12:22:56.406818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.979 [2024-04-26 12:22:56.424206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.979 [2024-04-26 12:22:56.424256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.979 [2024-04-26 12:22:56.424275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.979 [2024-04-26 12:22:56.441617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:02.979 [2024-04-26 12:22:56.441662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.979 [2024-04-26 12:22:56.441677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.237 [2024-04-26 12:22:56.459030] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.237 [2024-04-26 12:22:56.459077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.237 [2024-04-26 12:22:56.459091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.237 [2024-04-26 12:22:56.477533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.237 [2024-04-26 12:22:56.477582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.237 [2024-04-26 12:22:56.477597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.237 [2024-04-26 12:22:56.495686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 
00:26:03.237 [2024-04-26 12:22:56.495747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.237 [2024-04-26 12:22:56.495768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.237 [2024-04-26 12:22:56.514124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.237 [2024-04-26 12:22:56.514189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.237 [2024-04-26 12:22:56.514206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.237 [2024-04-26 12:22:56.531583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.237 [2024-04-26 12:22:56.531638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.237 [2024-04-26 12:22:56.531652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.237 [2024-04-26 12:22:56.548929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.237 [2024-04-26 12:22:56.548979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.237 [2024-04-26 12:22:56.548995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.237 [2024-04-26 12:22:56.566165] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.237 [2024-04-26 12:22:56.566223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.237 [2024-04-26 12:22:56.566237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.237 [2024-04-26 12:22:56.584182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.237 [2024-04-26 12:22:56.584240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.237 [2024-04-26 12:22:56.584257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.237 [2024-04-26 12:22:56.601643] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.237 [2024-04-26 12:22:56.601700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.237 [2024-04-26 12:22:56.601716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.237 [2024-04-26 12:22:56.618953] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.237 [2024-04-26 12:22:56.619010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.237 [2024-04-26 12:22:56.619026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.237 [2024-04-26 12:22:56.636603] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.237 [2024-04-26 12:22:56.636662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.237 [2024-04-26 12:22:56.636678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.237 [2024-04-26 12:22:56.654808] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.238 [2024-04-26 12:22:56.654874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.238 [2024-04-26 12:22:56.654889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.238 [2024-04-26 12:22:56.672041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.238 [2024-04-26 12:22:56.672101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.238 [2024-04-26 12:22:56.672117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.238 [2024-04-26 12:22:56.689236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.238 [2024-04-26 12:22:56.689283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.238 [2024-04-26 12:22:56.689298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.496 [2024-04-26 12:22:56.706315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.496 [2024-04-26 12:22:56.706365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.496 [2024-04-26 12:22:56.706380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.496 [2024-04-26 12:22:56.723371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.496 [2024-04-26 12:22:56.723427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.496 [2024-04-26 12:22:56.723449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:03.496 [2024-04-26 12:22:56.740454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.496 [2024-04-26 12:22:56.740502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.496 [2024-04-26 12:22:56.740516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.496 [2024-04-26 12:22:56.764963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.496 [2024-04-26 12:22:56.765014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.496 [2024-04-26 12:22:56.765028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.496 [2024-04-26 12:22:56.782025] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.496 [2024-04-26 12:22:56.782077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.496 [2024-04-26 12:22:56.782092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.496 [2024-04-26 12:22:56.799599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.496 [2024-04-26 12:22:56.799659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.496 [2024-04-26 12:22:56.799675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.496 [2024-04-26 12:22:56.817339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.496 [2024-04-26 12:22:56.817391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.496 [2024-04-26 12:22:56.817408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.496 [2024-04-26 12:22:56.835133] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.496 [2024-04-26 12:22:56.835198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.496 [2024-04-26 12:22:56.835215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.496 [2024-04-26 12:22:56.852295] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.496 [2024-04-26 12:22:56.852341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.496 [2024-04-26 12:22:56.852356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.496 [2024-04-26 12:22:56.869347] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.496 [2024-04-26 12:22:56.869392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.496 [2024-04-26 12:22:56.869407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.496 [2024-04-26 12:22:56.886413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.496 [2024-04-26 12:22:56.886458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.496 [2024-04-26 12:22:56.886473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.496 [2024-04-26 12:22:56.903426] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.496 [2024-04-26 12:22:56.903481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.496 [2024-04-26 12:22:56.903495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.496 [2024-04-26 12:22:56.920453] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.496 [2024-04-26 12:22:56.920495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.496 [2024-04-26 12:22:56.920509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.496 [2024-04-26 12:22:56.937524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.496 [2024-04-26 12:22:56.937566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.496 [2024-04-26 12:22:56.937581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.496 [2024-04-26 12:22:56.954567] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.496 [2024-04-26 12:22:56.954611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.496 [2024-04-26 12:22:56.954625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.754 [2024-04-26 12:22:56.971654] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.754 [2024-04-26 12:22:56.971697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.754 [2024-04-26 
12:22:56.971713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.754 [2024-04-26 12:22:56.988728] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.754 [2024-04-26 12:22:56.988772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.754 [2024-04-26 12:22:56.988786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.754 [2024-04-26 12:22:57.005760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.754 [2024-04-26 12:22:57.005805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.754 [2024-04-26 12:22:57.005820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.754 [2024-04-26 12:22:57.022850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.754 [2024-04-26 12:22:57.022896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.754 [2024-04-26 12:22:57.022910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.754 [2024-04-26 12:22:57.040050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.754 [2024-04-26 12:22:57.040099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.754 [2024-04-26 12:22:57.040113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.754 [2024-04-26 12:22:57.057093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.754 [2024-04-26 12:22:57.057137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.754 [2024-04-26 12:22:57.057151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.754 [2024-04-26 12:22:57.074136] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.754 [2024-04-26 12:22:57.074193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.754 [2024-04-26 12:22:57.074208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.754 [2024-04-26 12:22:57.091282] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.754 [2024-04-26 12:22:57.091330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15484 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.754 [2024-04-26 12:22:57.091344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.754 [2024-04-26 12:22:57.108319] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.754 [2024-04-26 12:22:57.108363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.754 [2024-04-26 12:22:57.108377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.754 [2024-04-26 12:22:57.125430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.754 [2024-04-26 12:22:57.125476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.754 [2024-04-26 12:22:57.125490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.754 [2024-04-26 12:22:57.143583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.754 [2024-04-26 12:22:57.143634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.754 [2024-04-26 12:22:57.143649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.754 [2024-04-26 12:22:57.160979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.754 [2024-04-26 12:22:57.161034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.755 [2024-04-26 12:22:57.161050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.755 [2024-04-26 12:22:57.178303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.755 [2024-04-26 12:22:57.178350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.755 [2024-04-26 12:22:57.178365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.755 [2024-04-26 12:22:57.196004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.755 [2024-04-26 12:22:57.196059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.755 [2024-04-26 12:22:57.196074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.755 [2024-04-26 12:22:57.215747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:03.755 [2024-04-26 12:22:57.215792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:74 nsid:1 lba:17004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.755 [2024-04-26 12:22:57.215807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.013 [2024-04-26 12:22:57.235224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.013 [2024-04-26 12:22:57.235273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.013 [2024-04-26 12:22:57.235288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.013 [2024-04-26 12:22:57.254596] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.013 [2024-04-26 12:22:57.254641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.013 [2024-04-26 12:22:57.254656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.013 [2024-04-26 12:22:57.279852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.013 [2024-04-26 12:22:57.279898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.013 [2024-04-26 12:22:57.279912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.013 [2024-04-26 12:22:57.296988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.013 [2024-04-26 12:22:57.297038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.013 [2024-04-26 12:22:57.297053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.013 [2024-04-26 12:22:57.314109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.013 [2024-04-26 12:22:57.314154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.013 [2024-04-26 12:22:57.314180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.013 [2024-04-26 12:22:57.331242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.013 [2024-04-26 12:22:57.331291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.013 [2024-04-26 12:22:57.331305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.013 [2024-04-26 12:22:57.348340] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.013 [2024-04-26 12:22:57.348382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.013 [2024-04-26 12:22:57.348397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.013 [2024-04-26 12:22:57.365365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.013 [2024-04-26 12:22:57.365408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.013 [2024-04-26 12:22:57.365422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.013 [2024-04-26 12:22:57.382519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.013 [2024-04-26 12:22:57.382565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.013 [2024-04-26 12:22:57.382580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.013 [2024-04-26 12:22:57.399595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.013 [2024-04-26 12:22:57.399645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.013 [2024-04-26 12:22:57.399660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.013 [2024-04-26 12:22:57.417013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.013 [2024-04-26 12:22:57.417076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.013 [2024-04-26 12:22:57.417091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.013 [2024-04-26 12:22:57.434280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.013 [2024-04-26 12:22:57.434329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.013 [2024-04-26 12:22:57.434343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.013 [2024-04-26 12:22:57.451566] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.013 [2024-04-26 12:22:57.451617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.013 [2024-04-26 12:22:57.451631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.013 [2024-04-26 12:22:57.468671] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 
00:26:04.013 [2024-04-26 12:22:57.468718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.013 [2024-04-26 12:22:57.468733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.279 [2024-04-26 12:22:57.485912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.279 [2024-04-26 12:22:57.485970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.279 [2024-04-26 12:22:57.485985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.279 [2024-04-26 12:22:57.503101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.279 [2024-04-26 12:22:57.503151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.279 [2024-04-26 12:22:57.503166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.280 [2024-04-26 12:22:57.520616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.280 [2024-04-26 12:22:57.520669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.280 [2024-04-26 12:22:57.520684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.280 [2024-04-26 12:22:57.537913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.280 [2024-04-26 12:22:57.537964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.280 [2024-04-26 12:22:57.537978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.280 [2024-04-26 12:22:57.555056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.280 [2024-04-26 12:22:57.555103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.280 [2024-04-26 12:22:57.555119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.280 [2024-04-26 12:22:57.572137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.280 [2024-04-26 12:22:57.572193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.280 [2024-04-26 12:22:57.572209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.280 [2024-04-26 12:22:57.589202] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.280 [2024-04-26 12:22:57.589245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.280 [2024-04-26 12:22:57.589260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.280 [2024-04-26 12:22:57.606304] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.280 [2024-04-26 12:22:57.606347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:40 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.280 [2024-04-26 12:22:57.606361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.280 [2024-04-26 12:22:57.623354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.280 [2024-04-26 12:22:57.623398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.280 [2024-04-26 12:22:57.623413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.280 [2024-04-26 12:22:57.640677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18d7360) 00:26:04.280 [2024-04-26 12:22:57.640734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.280 [2024-04-26 12:22:57.640749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.280 00:26:04.280 Latency(us) 00:26:04.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.280 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:04.280 nvme0n1 : 2.01 14511.10 56.68 0.00 0.00 8812.97 3559.80 33125.47 00:26:04.280 =================================================================================================================== 00:26:04.280 Total : 14511.10 56.68 0.00 0.00 8812.97 3559.80 33125.47 00:26:04.280 0 00:26:04.280 12:22:57 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:04.280 12:22:57 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:04.280 12:22:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:04.280 12:22:57 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:04.280 | .driver_specific 00:26:04.280 | .nvme_error 00:26:04.280 | .status_code 00:26:04.280 | .command_transient_transport_error' 00:26:04.557 12:22:57 -- host/digest.sh@71 -- # (( 114 > 0 )) 00:26:04.557 12:22:57 -- host/digest.sh@73 -- # killprocess 76663 00:26:04.557 12:22:57 -- common/autotest_common.sh@936 -- # '[' -z 76663 ']' 00:26:04.557 12:22:57 -- common/autotest_common.sh@940 -- # kill -0 76663 00:26:04.557 12:22:57 -- common/autotest_common.sh@941 -- # uname 00:26:04.557 12:22:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:04.557 12:22:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76663 
00:26:04.557 killing process with pid 76663 00:26:04.557 Received shutdown signal, test time was about 2.000000 seconds 00:26:04.557 00:26:04.557 Latency(us) 00:26:04.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.557 =================================================================================================================== 00:26:04.557 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:04.557 12:22:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:04.557 12:22:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:04.557 12:22:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76663' 00:26:04.557 12:22:57 -- common/autotest_common.sh@955 -- # kill 76663 00:26:04.557 12:22:57 -- common/autotest_common.sh@960 -- # wait 76663 00:26:04.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:04.834 12:22:58 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:04.834 12:22:58 -- host/digest.sh@54 -- # local rw bs qd 00:26:04.834 12:22:58 -- host/digest.sh@56 -- # rw=randread 00:26:04.834 12:22:58 -- host/digest.sh@56 -- # bs=131072 00:26:04.834 12:22:58 -- host/digest.sh@56 -- # qd=16 00:26:04.834 12:22:58 -- host/digest.sh@58 -- # bperfpid=76725 00:26:04.834 12:22:58 -- host/digest.sh@60 -- # waitforlisten 76725 /var/tmp/bperf.sock 00:26:04.834 12:22:58 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:04.834 12:22:58 -- common/autotest_common.sh@817 -- # '[' -z 76725 ']' 00:26:04.834 12:22:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:04.834 12:22:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:04.834 12:22:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:04.834 12:22:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:04.834 12:22:58 -- common/autotest_common.sh@10 -- # set +x 00:26:04.834 [2024-04-26 12:22:58.264615] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:26:04.834 [2024-04-26 12:22:58.264991] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76725 ] 00:26:04.834 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:04.834 Zero copy mechanism will not be used. 
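The block above also shows how the previous case is judged: host/digest.sh queries bdev_get_iostat over the bperf RPC socket and extracts the command_transient_transport_error counter with jq, and the run is only torn down after that count (114 for the 4 KiB / qd128 randread job) comes back greater than zero. A minimal sketch of that check, reusing the socket, bdev name and jq filter traced above:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    # host/digest.sh treats a count > 0 as proof the injected digest errors were observed, then kills bperf

The next case, run_bperf_err randread 131072 16, relaunches bdevperf for 128 KiB reads at queue depth 16 on the same socket; the traced launch line is equivalent to the following (flag meanings here are the usual bdevperf options rather than anything stated in this log; -z keeps bdevperf idle until the perform_tests RPC arrives):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z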
00:26:05.144 [2024-04-26 12:22:58.401319] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.144 [2024-04-26 12:22:58.519932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.079 12:22:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:06.079 12:22:59 -- common/autotest_common.sh@850 -- # return 0 00:26:06.079 12:22:59 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:06.079 12:22:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:06.079 12:22:59 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:06.079 12:22:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:06.079 12:22:59 -- common/autotest_common.sh@10 -- # set +x 00:26:06.079 12:22:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:06.079 12:22:59 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.079 12:22:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.337 nvme0n1 00:26:06.337 12:22:59 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:06.337 12:22:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:06.337 12:22:59 -- common/autotest_common.sh@10 -- # set +x 00:26:06.337 12:22:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:06.337 12:22:59 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:06.337 12:22:59 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:06.596 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:06.596 Zero copy mechanism will not be used. 00:26:06.596 Running I/O for 2 seconds... 
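Before perform_tests kicks off this 128 KiB / qd16 pass, the trace above wires up the same digest-failure path as before: NVMe error statistics are enabled with a bdev retry count of -1, the controller is attached over TCP with --ddgst so data digests are carried on the connection, and accel_error_inject_error -o crc32c -t corrupt -i 32 is issued via rpc_cmd so computed CRC32C values are corrupted; those corrupted digests are what surface below as "data digest error" followed by COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions. A condensed view of the setup, assuming the same bperf RPC socket shown in this run (the injection command is reproduced exactly as traced, without guessing which application socket rpc_cmd points at):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1    # keep NVMe error counters; -1 retries without limit
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0                            # data digest enabled on the NVMe/TCP connection
    # crc32c corruption is then injected: accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests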
00:26:06.596 [2024-04-26 12:22:59.834332] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.596 [2024-04-26 12:22:59.834389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.596 [2024-04-26 12:22:59.834405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.596 [2024-04-26 12:22:59.838597] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.596 [2024-04-26 12:22:59.838637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.596 [2024-04-26 12:22:59.838651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.596 [2024-04-26 12:22:59.842771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.596 [2024-04-26 12:22:59.842811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.596 [2024-04-26 12:22:59.842826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.596 [2024-04-26 12:22:59.847005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.596 [2024-04-26 12:22:59.847044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.596 [2024-04-26 12:22:59.847057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.596 [2024-04-26 12:22:59.851181] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.596 [2024-04-26 12:22:59.851220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.596 [2024-04-26 12:22:59.851233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.596 [2024-04-26 12:22:59.855389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.596 [2024-04-26 12:22:59.855428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.596 [2024-04-26 12:22:59.855442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.596 [2024-04-26 12:22:59.859620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.596 [2024-04-26 12:22:59.859661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.596 [2024-04-26 12:22:59.859675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.596 [2024-04-26 12:22:59.863886] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.596 [2024-04-26 12:22:59.863927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.596 [2024-04-26 12:22:59.863956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.596 [2024-04-26 12:22:59.868149] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.596 [2024-04-26 12:22:59.868215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.596 [2024-04-26 12:22:59.868230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.596 [2024-04-26 12:22:59.872553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.596 [2024-04-26 12:22:59.872592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.596 [2024-04-26 12:22:59.872605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.596 [2024-04-26 12:22:59.876783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.596 [2024-04-26 12:22:59.876823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.596 [2024-04-26 12:22:59.876836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.596 [2024-04-26 12:22:59.881029] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.881070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.881084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.885154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.885209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.885223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.889338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.889378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.889391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.893549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.893591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.893605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.897825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.897866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.897879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.902069] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.902110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.902123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.906283] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.906322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.906335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.910535] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.910574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.910588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.914772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.914811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.914825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.918965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.919005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:06.597 [2024-04-26 12:22:59.919019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.923353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.923391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.923405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.927644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.927694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.927707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.931953] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.931993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.932022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.936301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.936340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.936353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.940673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.940711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.940741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.944987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.945027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.945041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.949252] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.949290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.949304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.953429] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.953468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.953481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.957682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.957722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.957735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.961985] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.962025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.962039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.966186] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.966224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.597 [2024-04-26 12:22:59.966237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.597 [2024-04-26 12:22:59.970479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.597 [2024-04-26 12:22:59.970517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:22:59.970531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:22:59.974760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:22:59.974801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:22:59.974814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:22:59.979038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:22:59.979078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:22:59.979091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:22:59.983363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:22:59.983401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:22:59.983414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:22:59.987558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:22:59.987596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:22:59.987609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:22:59.991875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:22:59.991917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:22:59.991930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:22:59.996228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:22:59.996265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:22:59.996278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:23:00.000444] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:23:00.000484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:23:00.000497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:23:00.005473] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:23:00.005512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:23:00.005525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:23:00.009733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 
00:26:06.598 [2024-04-26 12:23:00.009773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:23:00.009803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:23:00.013991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:23:00.014030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:23:00.014060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:23:00.018274] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:23:00.018311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:23:00.018324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:23:00.022488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:23:00.022525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:23:00.022538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:23:00.026828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:23:00.026867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:23:00.026880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:23:00.031154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:23:00.031205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:23:00.031219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:23:00.035339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:23:00.035375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:23:00.035387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:23:00.039474] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:23:00.039511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:23:00.039524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:23:00.043821] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:23:00.043857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:23:00.043887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:23:00.048145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:23:00.048196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:23:00.048211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:23:00.052459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:23:00.052510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.598 [2024-04-26 12:23:00.052523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.598 [2024-04-26 12:23:00.056799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.598 [2024-04-26 12:23:00.056840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.599 [2024-04-26 12:23:00.056869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.599 [2024-04-26 12:23:00.061099] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.599 [2024-04-26 12:23:00.061149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.599 [2024-04-26 12:23:00.061162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.858 [2024-04-26 12:23:00.065430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.858 [2024-04-26 12:23:00.065469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.858 [2024-04-26 12:23:00.065482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:26:06.858 [2024-04-26 12:23:00.069826] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.858 [2024-04-26 12:23:00.069866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.858 [2024-04-26 12:23:00.069879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.858 [2024-04-26 12:23:00.074129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.858 [2024-04-26 12:23:00.074189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.858 [2024-04-26 12:23:00.074204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.858 [2024-04-26 12:23:00.078437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.858 [2024-04-26 12:23:00.078476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.858 [2024-04-26 12:23:00.078489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.858 [2024-04-26 12:23:00.082734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.858 [2024-04-26 12:23:00.082771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.858 [2024-04-26 12:23:00.082799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.858 [2024-04-26 12:23:00.086963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.858 [2024-04-26 12:23:00.087001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.858 [2024-04-26 12:23:00.087029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.858 [2024-04-26 12:23:00.091139] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.858 [2024-04-26 12:23:00.091191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.091206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.095327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.095364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.095377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.099467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.099503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.099516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.103753] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.103790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.103804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.107963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.108002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.108015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.112243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.112283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.112297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.116388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.116428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.116441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.120563] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.120602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.120615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.124745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.124785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.124798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.128996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.129037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.129050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.133116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.133155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.133186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.137335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.137374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.137387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.141541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.141580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.141595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.145784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.145836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.145853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.150059] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.150100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.150114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.154243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.154282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:06.859 [2024-04-26 12:23:00.154295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.158462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.158502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.158516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.162600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.162638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.162651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.166768] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.166805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.166833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.171038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.171078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.171092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.175275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.175329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.175345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.179584] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.179623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.179636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.183729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.183769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.183783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.187866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.187902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.187915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.192135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.192189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.192203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.196392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.196430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.196444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.200725] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.200767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.859 [2024-04-26 12:23:00.200781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.859 [2024-04-26 12:23:00.205033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.859 [2024-04-26 12:23:00.205073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.205087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.209277] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.209316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.209329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.213490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.213529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.213542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.217797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.217843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.217856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.222181] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.222236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.222249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.226408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.226446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.226459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.230632] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.230669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.230682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.234936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.234974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.234987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.239158] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.239209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.239223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.243439] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 
00:26:06.860 [2024-04-26 12:23:00.243484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.243497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.247622] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.247666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.247679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.251843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.251882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.251896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.256054] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.256093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.256122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.260303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.260342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.260355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.264506] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.264544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.264557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.268781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.268821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.268834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.273094] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.273135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.273149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.277434] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.277474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.277488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.281754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.281797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.281810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.285903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.285943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.285956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.290122] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.290165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.290197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.294409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.294448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.294461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.298600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.298640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.298654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.302969] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.303009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.303023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.307243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.307281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.307295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.311580] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.311619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.311633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.315877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.860 [2024-04-26 12:23:00.315917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.860 [2024-04-26 12:23:00.315930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.860 [2024-04-26 12:23:00.320148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.861 [2024-04-26 12:23:00.320204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.861 [2024-04-26 12:23:00.320218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.861 [2024-04-26 12:23:00.324376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:06.861 [2024-04-26 12:23:00.324416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.861 [2024-04-26 12:23:00.324429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.183 [2024-04-26 12:23:00.328619] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.183 [2024-04-26 12:23:00.328659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.183 [2024-04-26 12:23:00.328672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.183 [2024-04-26 12:23:00.332767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.183 [2024-04-26 12:23:00.332807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.183 [2024-04-26 12:23:00.332820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.183 [2024-04-26 12:23:00.336955] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.183 [2024-04-26 12:23:00.336994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.183 [2024-04-26 12:23:00.337008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.183 [2024-04-26 12:23:00.341221] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.183 [2024-04-26 12:23:00.341259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.183 [2024-04-26 12:23:00.341272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.183 [2024-04-26 12:23:00.345350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.183 [2024-04-26 12:23:00.345388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.183 [2024-04-26 12:23:00.345402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.183 [2024-04-26 12:23:00.349518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.183 [2024-04-26 12:23:00.349557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.183 [2024-04-26 12:23:00.349570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.183 [2024-04-26 12:23:00.353678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.183 [2024-04-26 12:23:00.353718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.183 [2024-04-26 12:23:00.353731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.183 [2024-04-26 12:23:00.357917] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.183 [2024-04-26 12:23:00.357956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.183 [2024-04-26 12:23:00.357983] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.183 [2024-04-26 12:23:00.362155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.183 [2024-04-26 12:23:00.362210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.183 [2024-04-26 12:23:00.362223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.183 [2024-04-26 12:23:00.366404] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.183 [2024-04-26 12:23:00.366444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.183 [2024-04-26 12:23:00.366457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.183 [2024-04-26 12:23:00.370474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.183 [2024-04-26 12:23:00.370512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.183 [2024-04-26 12:23:00.370526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.183 [2024-04-26 12:23:00.374685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.183 [2024-04-26 12:23:00.374725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.183 [2024-04-26 12:23:00.374737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.183 [2024-04-26 12:23:00.378886] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.183 [2024-04-26 12:23:00.378925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.183 [2024-04-26 12:23:00.378938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.183 [2024-04-26 12:23:00.383128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.183 [2024-04-26 12:23:00.383167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.183 [2024-04-26 12:23:00.383198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.183 [2024-04-26 12:23:00.387464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.183 [2024-04-26 12:23:00.387504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:07.183 [2024-04-26 12:23:00.387517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.183 [2024-04-26 12:23:00.391686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.183 [2024-04-26 12:23:00.391724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.183 [2024-04-26 12:23:00.391737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.395865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.395914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.395927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.400215] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.400255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.400269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.404358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.404397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.404411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.408530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.408569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.408582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.412684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.412722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.412735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.416929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.416969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.416983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.421206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.421258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.421286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.426120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.426167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.426198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.430447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.430488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.430502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.434645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.434686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.434699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.438811] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.438851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.438865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.443073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.443127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.443141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.448104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.448165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.448219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.453605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.453651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.453666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.458627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.458691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.458716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.463695] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.463739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.463754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.468137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.468197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.468212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.472617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.472677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.472701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.477072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.477136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.477155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.481514] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 
00:26:07.184 [2024-04-26 12:23:00.481553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.481567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.485996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.486038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.486052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.490388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.490429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.490442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.494591] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.494631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.494645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.498890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.498930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.498943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.503246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.503305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.503319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.507688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.507728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.507742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.511880] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.511920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.511933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.516148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.516217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.516231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.520425] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.520464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.520478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.524608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.524648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.524661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.528772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.528812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.528825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.533024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.533065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.533078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.537339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.537378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.537392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.541453] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.541492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.541505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.545602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.545641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.545655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.549765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.549805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.549819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.553895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.553935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.553948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.558125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.558165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.558195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.562354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.562393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.562407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.566592] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.566632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.566644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.570836] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.570877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.570890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.575035] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.575075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.575089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.579299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.579459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.579477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.184 [2024-04-26 12:23:00.583816] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.184 [2024-04-26 12:23:00.583860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.184 [2024-04-26 12:23:00.583874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.185 [2024-04-26 12:23:00.588112] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.185 [2024-04-26 12:23:00.588153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.185 [2024-04-26 12:23:00.588167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.185 [2024-04-26 12:23:00.592421] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.185 [2024-04-26 12:23:00.592462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.185 [2024-04-26 12:23:00.592476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.185 [2024-04-26 12:23:00.596628] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.185 [2024-04-26 12:23:00.596668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.185 [2024-04-26 12:23:00.596682] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.185 [2024-04-26 12:23:00.600787] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.185 [2024-04-26 12:23:00.600826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.185 [2024-04-26 12:23:00.600839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.185 [2024-04-26 12:23:00.605048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.185 [2024-04-26 12:23:00.605088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.185 [2024-04-26 12:23:00.605102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.185 [2024-04-26 12:23:00.609264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.185 [2024-04-26 12:23:00.609303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.185 [2024-04-26 12:23:00.609316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.185 [2024-04-26 12:23:00.613384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.185 [2024-04-26 12:23:00.613424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.185 [2024-04-26 12:23:00.613437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.185 [2024-04-26 12:23:00.617606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.185 [2024-04-26 12:23:00.617645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.185 [2024-04-26 12:23:00.617658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.185 [2024-04-26 12:23:00.621814] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.185 [2024-04-26 12:23:00.621854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.185 [2024-04-26 12:23:00.621868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.185 [2024-04-26 12:23:00.626073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.185 [2024-04-26 12:23:00.626112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:07.185 [2024-04-26 12:23:00.626125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.185 [2024-04-26 12:23:00.630360] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.185 [2024-04-26 12:23:00.630399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.185 [2024-04-26 12:23:00.630412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.185 [2024-04-26 12:23:00.634616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.185 [2024-04-26 12:23:00.634655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.185 [2024-04-26 12:23:00.634669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.185 [2024-04-26 12:23:00.638920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.185 [2024-04-26 12:23:00.638960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.185 [2024-04-26 12:23:00.638973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.185 [2024-04-26 12:23:00.643156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.185 [2024-04-26 12:23:00.643206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.185 [2024-04-26 12:23:00.643220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.185 [2024-04-26 12:23:00.647434] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.185 [2024-04-26 12:23:00.647482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.185 [2024-04-26 12:23:00.647496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.651701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.651741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.651753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.655863] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.655902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.655915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.660044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.660083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.660096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.664272] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.664312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.664325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.668501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.668539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.668553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.672690] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.672735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.672748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.677128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.677187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.677203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.681279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.681319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.681333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.685927] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.685976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.685990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.690184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.690235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.690249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.694466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.694507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.694522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.698674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.698715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.698728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.703065] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.703115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.703136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.707394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.707435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.707460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.711905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.711950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.711964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.716926] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 
00:26:07.443 [2024-04-26 12:23:00.716993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.717021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.722679] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.722742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.722766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.728230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.728294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.728319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.734217] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.734279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.734308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.739272] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.739316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.739330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.744075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.744132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.744157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.748639] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.748687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.748701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.753070] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.753122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.753136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.757572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.443 [2024-04-26 12:23:00.757653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.443 [2024-04-26 12:23:00.757676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.443 [2024-04-26 12:23:00.762357] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.762416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.762431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.766921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.766972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.766985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.771592] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.771647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.771662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.775959] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.776005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.776018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.780573] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.780622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.780637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.784959] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.785008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.785022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.789637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.789703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.789727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.794219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.794269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.794292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.799507] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.799575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.799596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.804822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.804906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.804923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.809393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.809446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.809461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.813828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.813889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.813914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.818316] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.818387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.818402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.822708] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.822755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.822770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.826956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.826995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.827009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.831213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.831250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.831263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.835441] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.835489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.835503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.839660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.839701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.839714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.843771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.843811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.843823] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.848212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.848255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.848269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.852485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.852526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.852539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.856844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.856902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.856915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.861150] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.861197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.861211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.865327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.865370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.865383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.869533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.869588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.869602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.873765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.873806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:07.444 [2024-04-26 12:23:00.873819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.877997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.878041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.878053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.882292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.882329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.882341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.886512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.886553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.886566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.890739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.890778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.890791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.894921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.894962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.894974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.899290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.899334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.899347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.903747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.903813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.903827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.444 [2024-04-26 12:23:00.908045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.444 [2024-04-26 12:23:00.908082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.444 [2024-04-26 12:23:00.908096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.705 [2024-04-26 12:23:00.912542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.705 [2024-04-26 12:23:00.912583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.705 [2024-04-26 12:23:00.912595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.705 [2024-04-26 12:23:00.916993] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.705 [2024-04-26 12:23:00.917037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.705 [2024-04-26 12:23:00.917051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.705 [2024-04-26 12:23:00.921317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.705 [2024-04-26 12:23:00.921361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.705 [2024-04-26 12:23:00.921374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.705 [2024-04-26 12:23:00.925503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.705 [2024-04-26 12:23:00.925552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.705 [2024-04-26 12:23:00.925566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.705 [2024-04-26 12:23:00.929808] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.705 [2024-04-26 12:23:00.929861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.705 [2024-04-26 12:23:00.929875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.705 [2024-04-26 12:23:00.934108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.705 [2024-04-26 12:23:00.934184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.705 [2024-04-26 12:23:00.934199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.705 [2024-04-26 12:23:00.938507] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.705 [2024-04-26 12:23:00.938552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.705 [2024-04-26 12:23:00.938567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.705 [2024-04-26 12:23:00.942683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.705 [2024-04-26 12:23:00.942727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.705 [2024-04-26 12:23:00.942741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.705 [2024-04-26 12:23:00.946822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.705 [2024-04-26 12:23:00.946865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.705 [2024-04-26 12:23:00.946878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.705 [2024-04-26 12:23:00.951028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.705 [2024-04-26 12:23:00.951071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.705 [2024-04-26 12:23:00.951083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.705 [2024-04-26 12:23:00.955321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.705 [2024-04-26 12:23:00.955363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.705 [2024-04-26 12:23:00.955376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.705 [2024-04-26 12:23:00.959973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.705 [2024-04-26 12:23:00.960014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.705 [2024-04-26 12:23:00.960028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.705 [2024-04-26 12:23:00.964288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 
00:26:07.705 [2024-04-26 12:23:00.964331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.705 [2024-04-26 12:23:00.964344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.705 [2024-04-26 12:23:00.968530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.705 [2024-04-26 12:23:00.968572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.705 [2024-04-26 12:23:00.968585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.705 [2024-04-26 12:23:00.972782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.705 [2024-04-26 12:23:00.972825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.705 [2024-04-26 12:23:00.972839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.705 [2024-04-26 12:23:00.977048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:00.977090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:00.977103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:00.981271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:00.981313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:00.981327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:00.985461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:00.985517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:00.985530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:00.989629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:00.989671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:00.989685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:00.993787] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:00.993828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:00.993840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:00.998069] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:00.998110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:00.998123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:01.002259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:01.002301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:01.002313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:01.006393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:01.006434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:01.006448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:01.010646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:01.010685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:01.010698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:01.014801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:01.014840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:01.014853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:01.019060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:01.019100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:01.019113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:01.023263] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:01.023298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:01.023311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:01.027463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:01.027498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:01.027510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:01.031699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:01.031739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:01.031751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:01.035913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:01.035969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:01.035991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:01.040231] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:01.040271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:01.040285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:01.044450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:01.044491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:01.044503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:01.048593] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:01.048631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:01.048644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:01.052933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:01.052974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:01.052986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:01.057158] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:01.057210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:01.057223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:01.061464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:01.061509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:01.061522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:01.065645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:01.065692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:01.065704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:01.069930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.706 [2024-04-26 12:23:01.069976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.706 [2024-04-26 12:23:01.069989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.706 [2024-04-26 12:23:01.074207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.074248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.074261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.078445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.078495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.078508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.082726] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.082768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.082781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.086981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.087021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.087034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.091158] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.091207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.091220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.095306] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.095346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.095359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.099522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.099560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.099572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.103723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.103764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.103776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.107908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.107958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:07.707 [2024-04-26 12:23:01.107970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.112135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.112190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.112205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.116301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.116343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.116355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.120522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.120563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.120575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.124720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.124765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.124778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.128883] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.128926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.128939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.134154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.134218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.134232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.138642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.138696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.138710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.143659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.143716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.143731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.148257] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.148318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.148333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.152687] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.152734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.152748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.157133] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.157194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.157210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.162945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.163032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.163060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.707 [2024-04-26 12:23:01.168556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.707 [2024-04-26 12:23:01.168621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.707 [2024-04-26 12:23:01.168643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.967 [2024-04-26 12:23:01.174121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.967 [2024-04-26 12:23:01.174197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.967 [2024-04-26 12:23:01.174220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.967 [2024-04-26 12:23:01.180007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.967 [2024-04-26 12:23:01.180094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.967 [2024-04-26 12:23:01.180133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.967 [2024-04-26 12:23:01.185644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.967 [2024-04-26 12:23:01.185708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.967 [2024-04-26 12:23:01.185731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.967 [2024-04-26 12:23:01.191290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.967 [2024-04-26 12:23:01.191346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.967 [2024-04-26 12:23:01.191366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.967 [2024-04-26 12:23:01.195644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.967 [2024-04-26 12:23:01.195690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.967 [2024-04-26 12:23:01.195704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.200009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.200055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.200069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.204498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.204543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.204557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.208854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 
00:26:07.968 [2024-04-26 12:23:01.208900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.208914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.213189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.213231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.213245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.217595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.217640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.217654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.222192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.222237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.222252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.226757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.226799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.226813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.231131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.231196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.231209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.235538] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.235580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.235593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.239770] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.239818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.239831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.244119] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.244163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.244190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.248497] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.248544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.248557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.252691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.252736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.252749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.256950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.256995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.257008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.261359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.261409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.261423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.265665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.265714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.265728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.269949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.269995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.270008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.274399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.274468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.274484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.278802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.278866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.278897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.283820] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.283882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.283897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.288257] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.288313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.288328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.292660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.292705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.292718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.297024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.297067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.297080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.301377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.301423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.301436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.305614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.305658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.968 [2024-04-26 12:23:01.305671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.968 [2024-04-26 12:23:01.309783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.968 [2024-04-26 12:23:01.309825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.309838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.314197] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.314262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.314292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.318604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.318644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.318657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.322928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.322970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.322983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.327255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.327294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.327307] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.331602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.331643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.331656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.335935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.335997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.336011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.340230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.340278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.340291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.344494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.344541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.344554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.348605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.348651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.348664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.353166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.353227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.353242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.357516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.357577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.357598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.362043] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.362089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.362103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.366525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.366568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.366582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.371663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.371732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.371756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.377741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.377831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.377867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.383508] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.383578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.383601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.389361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.389430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.389451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.395162] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.395254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.395279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.400801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.400869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.400893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.406635] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.406735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.406776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.411731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.411780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.411794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.416455] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.416500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.416514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.420823] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.420867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.420881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.425315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.425359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.425373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.429717] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.429760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.429773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.969 [2024-04-26 12:23:01.434144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:07.969 [2024-04-26 12:23:01.434198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.969 [2024-04-26 12:23:01.434212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.229 [2024-04-26 12:23:01.438510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.229 [2024-04-26 12:23:01.438553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.229 [2024-04-26 12:23:01.438567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.229 [2024-04-26 12:23:01.442663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.229 [2024-04-26 12:23:01.442704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.229 [2024-04-26 12:23:01.442717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.229 [2024-04-26 12:23:01.446955] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.229 [2024-04-26 12:23:01.446998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.229 [2024-04-26 12:23:01.447011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.229 [2024-04-26 12:23:01.451261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.229 [2024-04-26 12:23:01.451305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.229 [2024-04-26 12:23:01.451318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.229 [2024-04-26 12:23:01.456354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.229 [2024-04-26 12:23:01.456415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.229 [2024-04-26 12:23:01.456429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.229 [2024-04-26 12:23:01.460819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 
00:26:08.229 [2024-04-26 12:23:01.460873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.229 [2024-04-26 12:23:01.460887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.229 [2024-04-26 12:23:01.465166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.229 [2024-04-26 12:23:01.465233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.229 [2024-04-26 12:23:01.465246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.229 [2024-04-26 12:23:01.470127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.229 [2024-04-26 12:23:01.470201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.229 [2024-04-26 12:23:01.470218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.229 [2024-04-26 12:23:01.474629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.229 [2024-04-26 12:23:01.474670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.229 [2024-04-26 12:23:01.474684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.229 [2024-04-26 12:23:01.479038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.229 [2024-04-26 12:23:01.479080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.229 [2024-04-26 12:23:01.479094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.229 [2024-04-26 12:23:01.483504] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.229 [2024-04-26 12:23:01.483545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.229 [2024-04-26 12:23:01.483558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.229 [2024-04-26 12:23:01.487793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.229 [2024-04-26 12:23:01.487836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.229 [2024-04-26 12:23:01.487849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.229 [2024-04-26 12:23:01.492134] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.229 [2024-04-26 12:23:01.492186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.229 [2024-04-26 12:23:01.492201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.229 [2024-04-26 12:23:01.496486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.229 [2024-04-26 12:23:01.496527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.229 [2024-04-26 12:23:01.496540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.229 [2024-04-26 12:23:01.500752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.229 [2024-04-26 12:23:01.500795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.229 [2024-04-26 12:23:01.500808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.229 [2024-04-26 12:23:01.504991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.229 [2024-04-26 12:23:01.505033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.229 [2024-04-26 12:23:01.505045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.229 [2024-04-26 12:23:01.509270] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.229 [2024-04-26 12:23:01.509310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.229 [2024-04-26 12:23:01.509325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.229 [2024-04-26 12:23:01.513401] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.229 [2024-04-26 12:23:01.513451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.513464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.517614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.517662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.517676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.522279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.522331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.522346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.526673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.526717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.526730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.530995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.531051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.531081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.535396] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.535435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.535458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.539777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.539824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.539838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.544705] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.544752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.544767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.549135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.549191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.549206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.553542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.553587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.553600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.557958] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.558019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.558032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.562447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.562492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.562505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.566825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.566869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.566882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.571232] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.571271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.571291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.575630] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.575674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.575687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.579797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.579840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.579853] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.583909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.583948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.583961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.588125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.588162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.588187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.592246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.592288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.592301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.597080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.597121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.597134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.601615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.601660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.601674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.606678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.606723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.606737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.611534] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.611579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.611593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.616464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.616522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.616543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.621090] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.621138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.621152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.625365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.625409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.625423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.629614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.629658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.230 [2024-04-26 12:23:01.629671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.230 [2024-04-26 12:23:01.634099] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.230 [2024-04-26 12:23:01.634143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.231 [2024-04-26 12:23:01.634157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.231 [2024-04-26 12:23:01.638415] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.231 [2024-04-26 12:23:01.638454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.231 [2024-04-26 12:23:01.638467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.231 [2024-04-26 12:23:01.642677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.231 [2024-04-26 12:23:01.642719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.231 [2024-04-26 12:23:01.642731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.231 [2024-04-26 12:23:01.646913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.231 [2024-04-26 12:23:01.646953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.231 [2024-04-26 12:23:01.646965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.231 [2024-04-26 12:23:01.651202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.231 [2024-04-26 12:23:01.651240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.231 [2024-04-26 12:23:01.651252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.231 [2024-04-26 12:23:01.655360] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.231 [2024-04-26 12:23:01.655397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.231 [2024-04-26 12:23:01.655410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.231 [2024-04-26 12:23:01.659744] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.231 [2024-04-26 12:23:01.659786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.231 [2024-04-26 12:23:01.659799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.231 [2024-04-26 12:23:01.664069] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.231 [2024-04-26 12:23:01.664108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.231 [2024-04-26 12:23:01.664121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.231 [2024-04-26 12:23:01.668395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.231 [2024-04-26 12:23:01.668434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.231 [2024-04-26 12:23:01.668447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.231 [2024-04-26 12:23:01.672742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.231 [2024-04-26 12:23:01.672786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.231 [2024-04-26 12:23:01.672799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.231 [2024-04-26 12:23:01.676956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.231 [2024-04-26 12:23:01.676998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.231 [2024-04-26 12:23:01.677011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.231 [2024-04-26 12:23:01.681074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.231 [2024-04-26 12:23:01.681114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.231 [2024-04-26 12:23:01.681127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.231 [2024-04-26 12:23:01.685325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.231 [2024-04-26 12:23:01.685363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.231 [2024-04-26 12:23:01.685377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.231 [2024-04-26 12:23:01.689638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.231 [2024-04-26 12:23:01.689694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.231 [2024-04-26 12:23:01.689707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.231 [2024-04-26 12:23:01.694217] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.231 [2024-04-26 12:23:01.694259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.231 [2024-04-26 12:23:01.694274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.698456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.698501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.698514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.702783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 
00:26:08.490 [2024-04-26 12:23:01.702836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.702850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.707109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.707160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.707187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.711394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.711439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.711465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.715614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.715653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.715667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.719866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.719907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.719920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.724035] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.724077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.724090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.728275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.728315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.728328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.732469] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.732510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.732523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.736858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.736902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.736915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.741189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.741229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.741242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.745442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.745484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.745496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.749719] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.749760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.749773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.754044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.754088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.754101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.758280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.758320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.758334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.762465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.762506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.762519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.766674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.766714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.766728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.770948] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.770986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.770999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.775221] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.775259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.775272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.779382] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.779419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.779431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.783600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.783638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.783651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.788018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.788061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.788075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.792264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.792304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.792316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.796478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.796519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.796532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.800673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.800718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.800731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.804915] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.490 [2024-04-26 12:23:01.804966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.490 [2024-04-26 12:23:01.804980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.490 [2024-04-26 12:23:01.809130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.491 [2024-04-26 12:23:01.809185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.491 [2024-04-26 12:23:01.809200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.491 [2024-04-26 12:23:01.813347] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.491 [2024-04-26 12:23:01.813387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.491 [2024-04-26 12:23:01.813400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.491 [2024-04-26 12:23:01.817505] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.491 [2024-04-26 12:23:01.817544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.491 [2024-04-26 12:23:01.817556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.491 [2024-04-26 12:23:01.821669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.491 [2024-04-26 12:23:01.821710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.491 [2024-04-26 12:23:01.821723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.491 [2024-04-26 12:23:01.825862] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.491 [2024-04-26 12:23:01.825902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.491 [2024-04-26 12:23:01.825914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.491 [2024-04-26 12:23:01.829885] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2138820) 00:26:08.491 [2024-04-26 12:23:01.829924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.491 [2024-04-26 12:23:01.829936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.491 00:26:08.491 Latency(us) 00:26:08.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.491 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:08.491 nvme0n1 : 2.00 7080.19 885.02 0.00 0.00 2256.33 1906.50 6285.50 00:26:08.491 =================================================================================================================== 00:26:08.491 Total : 7080.19 885.02 0.00 0.00 2256.33 1906.50 6285.50 00:26:08.491 0 00:26:08.491 12:23:01 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:08.491 12:23:01 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:08.491 12:23:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:08.491 12:23:01 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:08.491 | .driver_specific 00:26:08.491 | .nvme_error 00:26:08.491 | .status_code 00:26:08.491 | .command_transient_transport_error' 00:26:08.750 12:23:02 -- host/digest.sh@71 -- # (( 457 > 0 )) 00:26:08.750 12:23:02 -- host/digest.sh@73 -- # killprocess 76725 00:26:08.750 12:23:02 -- common/autotest_common.sh@936 -- # '[' -z 76725 ']' 00:26:08.750 12:23:02 -- common/autotest_common.sh@940 -- # kill -0 76725 00:26:08.750 12:23:02 -- common/autotest_common.sh@941 -- # uname 00:26:08.750 12:23:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:08.750 12:23:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76725 00:26:08.750 killing process with pid 76725 00:26:08.750 Received shutdown signal, test time was about 2.000000 seconds 00:26:08.750 00:26:08.750 Latency(us) 00:26:08.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.750 
=================================================================================================================== 00:26:08.750 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:08.750 12:23:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:08.750 12:23:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:08.750 12:23:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76725' 00:26:08.750 12:23:02 -- common/autotest_common.sh@955 -- # kill 76725 00:26:08.750 12:23:02 -- common/autotest_common.sh@960 -- # wait 76725 00:26:09.007 12:23:02 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:09.007 12:23:02 -- host/digest.sh@54 -- # local rw bs qd 00:26:09.007 12:23:02 -- host/digest.sh@56 -- # rw=randwrite 00:26:09.007 12:23:02 -- host/digest.sh@56 -- # bs=4096 00:26:09.007 12:23:02 -- host/digest.sh@56 -- # qd=128 00:26:09.007 12:23:02 -- host/digest.sh@58 -- # bperfpid=76785 00:26:09.007 12:23:02 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:09.007 12:23:02 -- host/digest.sh@60 -- # waitforlisten 76785 /var/tmp/bperf.sock 00:26:09.007 12:23:02 -- common/autotest_common.sh@817 -- # '[' -z 76785 ']' 00:26:09.007 12:23:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:09.007 12:23:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:09.007 12:23:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:09.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:09.007 12:23:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:09.007 12:23:02 -- common/autotest_common.sh@10 -- # set +x 00:26:09.265 [2024-04-26 12:23:02.486613] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:26:09.265 [2024-04-26 12:23:02.486729] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76785 ] 00:26:09.265 [2024-04-26 12:23:02.625811] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.524 [2024-04-26 12:23:02.739995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.088 12:23:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:10.088 12:23:03 -- common/autotest_common.sh@850 -- # return 0 00:26:10.088 12:23:03 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:10.088 12:23:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:10.345 12:23:03 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:10.345 12:23:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.345 12:23:03 -- common/autotest_common.sh@10 -- # set +x 00:26:10.345 12:23:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.345 12:23:03 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.345 12:23:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.603 nvme0n1 00:26:10.603 12:23:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:10.603 12:23:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.603 12:23:04 -- common/autotest_common.sh@10 -- # set +x 00:26:10.603 12:23:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.603 12:23:04 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:10.603 12:23:04 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:10.860 Running I/O for 2 seconds... 
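The trace above shows how host/digest.sh prepares this error case: bdevperf is started as the NVMe/TCP host on /var/tmp/bperf.sock, per-command NVMe error statistics are enabled with unlimited retries, the controller is attached with --ddgst so every data PDU is verified against its data digest, crc32c error injection is re-armed on the accel layer, and perform_tests drives the 2-second randwrite workload whose digest failures are printed below. What follows is a minimal standalone sketch of that same RPC sequence, using only commands visible in the trace; the repo path and sockets are the ones from this run, and sending accel_error_inject_error without -s (i.e. to the target's default RPC socket) is an assumption, since the test script routes those calls through its rpc_cmd helper.

    SPDK=/home/vagrant/spdk_repo/spdk          # repo path as used in this run
    BPERF_SOCK=/var/tmp/bperf.sock             # RPC socket of the bdevperf host application

    # Start bdevperf as the NVMe/TCP host: core mask 0x2, randwrite, 4 KiB I/O, qd 128, 2 s, wait-for-RPC mode (-z).
    $SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randwrite -o 4096 -t 2 -q 128 -z &

    # Keep per-command NVMe error statistics and retry failed commands indefinitely.
    $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any previous crc32c error injection on the accel layer (default RPC socket assumed here).
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

    # Attach the controller with data digest enabled.
    $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-arm crc32c error injection with the same arguments the trace uses, then run the workload.
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests

    # Read back how many completions carried the transient transport error status, exactly as
    # get_transient_errcount did above (457 for the preceding randread case).
    $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each injected crc32c mismatch is reported by the host as a data digest error on the TCP qpair and the corresponding command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what the WRITE entries below record for this run.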
00:26:10.860 [2024-04-26 12:23:04.166256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190fef90 00:26:10.860 [2024-04-26 12:23:04.168929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.860 [2024-04-26 12:23:04.168977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.860 [2024-04-26 12:23:04.182595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190feb58 00:26:10.860 [2024-04-26 12:23:04.185220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.860 [2024-04-26 12:23:04.185261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:10.861 [2024-04-26 12:23:04.198899] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190fe2e8 00:26:10.861 [2024-04-26 12:23:04.201532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.861 [2024-04-26 12:23:04.201592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:10.861 [2024-04-26 12:23:04.215940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190fda78 00:26:10.861 [2024-04-26 12:23:04.218529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.861 [2024-04-26 12:23:04.218573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:10.861 [2024-04-26 12:23:04.232312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190fd208 00:26:10.861 [2024-04-26 12:23:04.234864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.861 [2024-04-26 12:23:04.234903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:10.861 [2024-04-26 12:23:04.248597] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190fc998 00:26:10.861 [2024-04-26 12:23:04.251115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.861 [2024-04-26 12:23:04.251151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:10.861 [2024-04-26 12:23:04.264783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190fc128 00:26:10.861 [2024-04-26 12:23:04.267286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.861 [2024-04-26 12:23:04.267320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 
p:0 m:0 dnr:0 00:26:10.861 [2024-04-26 12:23:04.281001] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190fb8b8 00:26:10.861 [2024-04-26 12:23:04.283497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.861 [2024-04-26 12:23:04.283526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:10.861 [2024-04-26 12:23:04.297236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190fb048 00:26:10.861 [2024-04-26 12:23:04.299705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.861 [2024-04-26 12:23:04.299747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:10.861 [2024-04-26 12:23:04.313465] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190fa7d8 00:26:10.861 [2024-04-26 12:23:04.315905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.861 [2024-04-26 12:23:04.315946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:11.119 [2024-04-26 12:23:04.329599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f9f68 00:26:11.119 [2024-04-26 12:23:04.332013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.119 [2024-04-26 12:23:04.332052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:11.119 [2024-04-26 12:23:04.345817] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f96f8 00:26:11.119 [2024-04-26 12:23:04.348240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.119 [2024-04-26 12:23:04.348281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:11.119 [2024-04-26 12:23:04.362302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f8e88 00:26:11.119 [2024-04-26 12:23:04.364694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.119 [2024-04-26 12:23:04.364742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:11.119 [2024-04-26 12:23:04.378578] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f8618 00:26:11.119 [2024-04-26 12:23:04.380976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.119 [2024-04-26 12:23:04.381030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:11.119 [2024-04-26 12:23:04.394928] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f7da8 00:26:11.119 [2024-04-26 12:23:04.397275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.119 [2024-04-26 12:23:04.397314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:11.119 [2024-04-26 12:23:04.411124] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f7538 00:26:11.119 [2024-04-26 12:23:04.413439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.119 [2024-04-26 12:23:04.413479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:11.119 [2024-04-26 12:23:04.427466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f6cc8 00:26:11.119 [2024-04-26 12:23:04.429763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.119 [2024-04-26 12:23:04.429800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.119 [2024-04-26 12:23:04.443813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f6458 00:26:11.119 [2024-04-26 12:23:04.446144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.119 [2024-04-26 12:23:04.446190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:11.119 [2024-04-26 12:23:04.460182] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f5be8 00:26:11.119 [2024-04-26 12:23:04.462432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.119 [2024-04-26 12:23:04.462480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:11.119 [2024-04-26 12:23:04.476427] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f5378 00:26:11.119 [2024-04-26 12:23:04.478664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.119 [2024-04-26 12:23:04.478700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:11.119 [2024-04-26 12:23:04.492649] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f4b08 00:26:11.119 [2024-04-26 12:23:04.494873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.119 [2024-04-26 12:23:04.494909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:11.119 [2024-04-26 12:23:04.508998] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f4298 00:26:11.119 [2024-04-26 12:23:04.511218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.119 [2024-04-26 12:23:04.511256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:11.120 [2024-04-26 12:23:04.525405] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f3a28 00:26:11.120 [2024-04-26 12:23:04.527613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.120 [2024-04-26 12:23:04.527654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:11.120 [2024-04-26 12:23:04.541783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f31b8 00:26:11.120 [2024-04-26 12:23:04.543964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.120 [2024-04-26 12:23:04.544003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:11.120 [2024-04-26 12:23:04.558163] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f2948 00:26:11.120 [2024-04-26 12:23:04.560318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.120 [2024-04-26 12:23:04.560356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:11.120 [2024-04-26 12:23:04.574415] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f20d8 00:26:11.120 [2024-04-26 12:23:04.576533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.120 [2024-04-26 12:23:04.576573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:11.378 [2024-04-26 12:23:04.590670] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f1868 00:26:11.378 [2024-04-26 12:23:04.592762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.378 [2024-04-26 12:23:04.592796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:11.378 [2024-04-26 12:23:04.606891] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f0ff8 00:26:11.378 [2024-04-26 12:23:04.608999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.378 [2024-04-26 12:23:04.609039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:11.378 [2024-04-26 12:23:04.623272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f0788 00:26:11.378 [2024-04-26 12:23:04.625343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.378 [2024-04-26 12:23:04.625387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:11.378 [2024-04-26 12:23:04.639627] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190eff18 00:26:11.378 [2024-04-26 12:23:04.641665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.378 [2024-04-26 12:23:04.641708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:11.378 [2024-04-26 12:23:04.656089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190ef6a8 00:26:11.378 [2024-04-26 12:23:04.658106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.378 [2024-04-26 12:23:04.658144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:11.378 [2024-04-26 12:23:04.672474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190eee38 00:26:11.378 [2024-04-26 12:23:04.674477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.378 [2024-04-26 12:23:04.674515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:11.378 [2024-04-26 12:23:04.689003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190ee5c8 00:26:11.378 [2024-04-26 12:23:04.691016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.378 [2024-04-26 12:23:04.691053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.378 [2024-04-26 12:23:04.705472] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190edd58 00:26:11.378 [2024-04-26 12:23:04.707429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.378 [2024-04-26 12:23:04.707478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:11.378 [2024-04-26 12:23:04.721997] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190ed4e8 00:26:11.378 [2024-04-26 12:23:04.723943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.378 [2024-04-26 12:23:04.723983] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:11.378 [2024-04-26 12:23:04.738387] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190ecc78 00:26:11.378 [2024-04-26 12:23:04.740318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.378 [2024-04-26 12:23:04.740358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:11.378 [2024-04-26 12:23:04.754877] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190ec408 00:26:11.378 [2024-04-26 12:23:04.756790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.378 [2024-04-26 12:23:04.756830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:11.378 [2024-04-26 12:23:04.771257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190ebb98 00:26:11.378 [2024-04-26 12:23:04.773137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.378 [2024-04-26 12:23:04.773185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:11.378 [2024-04-26 12:23:04.787771] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190eb328 00:26:11.378 [2024-04-26 12:23:04.789627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.378 [2024-04-26 12:23:04.789665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:11.378 [2024-04-26 12:23:04.804165] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190eaab8 00:26:11.378 [2024-04-26 12:23:04.805987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.378 [2024-04-26 12:23:04.806026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:11.378 [2024-04-26 12:23:04.820558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190ea248 00:26:11.378 [2024-04-26 12:23:04.822369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.378 [2024-04-26 12:23:04.822408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:11.378 [2024-04-26 12:23:04.836863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e99d8 00:26:11.378 [2024-04-26 12:23:04.838651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.378 [2024-04-26 12:23:04.838687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:11.638 [2024-04-26 12:23:04.853033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e9168 00:26:11.638 [2024-04-26 12:23:04.854785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.638 [2024-04-26 12:23:04.854820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:11.638 [2024-04-26 12:23:04.869372] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e88f8 00:26:11.638 [2024-04-26 12:23:04.871092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.638 [2024-04-26 12:23:04.871130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:11.638 [2024-04-26 12:23:04.885643] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e8088 00:26:11.638 [2024-04-26 12:23:04.887345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.638 [2024-04-26 12:23:04.887382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:11.638 [2024-04-26 12:23:04.901870] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e7818 00:26:11.638 [2024-04-26 12:23:04.903565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.639 [2024-04-26 12:23:04.903601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:11.639 [2024-04-26 12:23:04.918207] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e6fa8 00:26:11.639 [2024-04-26 12:23:04.919875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.639 [2024-04-26 12:23:04.919916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:11.639 [2024-04-26 12:23:04.934455] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e6738 00:26:11.639 [2024-04-26 12:23:04.936112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.639 [2024-04-26 12:23:04.936150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:11.639 [2024-04-26 12:23:04.950798] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e5ec8 00:26:11.639 [2024-04-26 12:23:04.952460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.639 [2024-04-26 
12:23:04.952496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.639 [2024-04-26 12:23:04.967154] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e5658 00:26:11.639 [2024-04-26 12:23:04.968780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.639 [2024-04-26 12:23:04.968817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:11.639 [2024-04-26 12:23:04.983495] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e4de8 00:26:11.639 [2024-04-26 12:23:04.985080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.639 [2024-04-26 12:23:04.985117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:11.639 [2024-04-26 12:23:04.999861] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e4578 00:26:11.639 [2024-04-26 12:23:05.001449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.639 [2024-04-26 12:23:05.001490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:11.639 [2024-04-26 12:23:05.016295] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e3d08 00:26:11.639 [2024-04-26 12:23:05.017837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.639 [2024-04-26 12:23:05.017876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:11.639 [2024-04-26 12:23:05.032655] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e3498 00:26:11.639 [2024-04-26 12:23:05.034198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.639 [2024-04-26 12:23:05.034232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:11.639 [2024-04-26 12:23:05.049022] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e2c28 00:26:11.639 [2024-04-26 12:23:05.050547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.639 [2024-04-26 12:23:05.050583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:11.639 [2024-04-26 12:23:05.065389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e23b8 00:26:11.639 [2024-04-26 12:23:05.066883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19127 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:11.639 [2024-04-26 12:23:05.066921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:11.639 [2024-04-26 12:23:05.081749] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e1b48 00:26:11.639 [2024-04-26 12:23:05.083221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.639 [2024-04-26 12:23:05.083256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:11.639 [2024-04-26 12:23:05.098390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e12d8 00:26:11.639 [2024-04-26 12:23:05.099846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.639 [2024-04-26 12:23:05.099888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:11.906 [2024-04-26 12:23:05.115367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e0a68 00:26:11.906 [2024-04-26 12:23:05.116791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.906 [2024-04-26 12:23:05.116834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:11.906 [2024-04-26 12:23:05.131837] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e01f8 00:26:11.906 [2024-04-26 12:23:05.133261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.906 [2024-04-26 12:23:05.133302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:11.906 [2024-04-26 12:23:05.148330] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190df988 00:26:11.906 [2024-04-26 12:23:05.149714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.906 [2024-04-26 12:23:05.149752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:11.906 [2024-04-26 12:23:05.164643] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190df118 00:26:11.906 [2024-04-26 12:23:05.166007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.906 [2024-04-26 12:23:05.166044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:11.906 [2024-04-26 12:23:05.181107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190de8a8 00:26:11.906 [2024-04-26 12:23:05.182457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:8274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.906 [2024-04-26 12:23:05.182495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:11.906 [2024-04-26 12:23:05.197393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190de038 00:26:11.906 [2024-04-26 12:23:05.198701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.906 [2024-04-26 12:23:05.198735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:11.906 [2024-04-26 12:23:05.220396] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190de038 00:26:11.906 [2024-04-26 12:23:05.223022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.906 [2024-04-26 12:23:05.223061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.906 [2024-04-26 12:23:05.236716] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190de8a8 00:26:11.906 [2024-04-26 12:23:05.239315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.906 [2024-04-26 12:23:05.239360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:11.906 [2024-04-26 12:23:05.252918] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190df118 00:26:11.906 [2024-04-26 12:23:05.255515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.906 [2024-04-26 12:23:05.255551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:11.906 [2024-04-26 12:23:05.269185] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190df988 00:26:11.906 [2024-04-26 12:23:05.271779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.906 [2024-04-26 12:23:05.271820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:11.906 [2024-04-26 12:23:05.285548] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e01f8 00:26:11.906 [2024-04-26 12:23:05.288103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.906 [2024-04-26 12:23:05.288143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:11.906 [2024-04-26 12:23:05.301843] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e0a68 00:26:11.906 [2024-04-26 12:23:05.304364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:104 nsid:1 lba:4753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.906 [2024-04-26 12:23:05.304407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:11.906 [2024-04-26 12:23:05.318159] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e12d8 00:26:11.906 [2024-04-26 12:23:05.320678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.907 [2024-04-26 12:23:05.320720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:11.907 [2024-04-26 12:23:05.334411] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e1b48 00:26:11.907 [2024-04-26 12:23:05.336888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.907 [2024-04-26 12:23:05.336928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:11.907 [2024-04-26 12:23:05.350648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e23b8 00:26:11.907 [2024-04-26 12:23:05.353133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.907 [2024-04-26 12:23:05.353194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:12.171 [2024-04-26 12:23:05.367152] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e2c28 00:26:12.171 [2024-04-26 12:23:05.369601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.171 [2024-04-26 12:23:05.369653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:12.171 [2024-04-26 12:23:05.383444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e3498 00:26:12.171 [2024-04-26 12:23:05.385867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.171 [2024-04-26 12:23:05.385905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:12.171 [2024-04-26 12:23:05.399712] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e3d08 00:26:12.171 [2024-04-26 12:23:05.402098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.171 [2024-04-26 12:23:05.402136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:12.171 [2024-04-26 12:23:05.416051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e4578 00:26:12.171 [2024-04-26 12:23:05.418421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.171 [2024-04-26 12:23:05.418461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:12.171 [2024-04-26 12:23:05.432382] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e4de8 00:26:12.171 [2024-04-26 12:23:05.434717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.171 [2024-04-26 12:23:05.434757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:12.171 [2024-04-26 12:23:05.448610] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e5658 00:26:12.171 [2024-04-26 12:23:05.450937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.171 [2024-04-26 12:23:05.450973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:12.171 [2024-04-26 12:23:05.464853] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e5ec8 00:26:12.171 [2024-04-26 12:23:05.467151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.171 [2024-04-26 12:23:05.467195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:12.171 [2024-04-26 12:23:05.481143] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e6738 00:26:12.171 [2024-04-26 12:23:05.483422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.171 [2024-04-26 12:23:05.483466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:12.171 [2024-04-26 12:23:05.497318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e6fa8 00:26:12.171 [2024-04-26 12:23:05.499575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.171 [2024-04-26 12:23:05.499610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:12.171 [2024-04-26 12:23:05.513553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e7818 00:26:12.171 [2024-04-26 12:23:05.515809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.171 [2024-04-26 12:23:05.515854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:12.171 [2024-04-26 12:23:05.529816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e8088 00:26:12.171 [2024-04-26 
12:23:05.532042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.171 [2024-04-26 12:23:05.532080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:12.171 [2024-04-26 12:23:05.546050] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e88f8 00:26:12.171 [2024-04-26 12:23:05.548262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.171 [2024-04-26 12:23:05.548301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:12.171 [2024-04-26 12:23:05.562252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e9168 00:26:12.171 [2024-04-26 12:23:05.564418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.171 [2024-04-26 12:23:05.564458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:12.171 [2024-04-26 12:23:05.578468] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190e99d8 00:26:12.171 [2024-04-26 12:23:05.580642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.171 [2024-04-26 12:23:05.580680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:12.171 [2024-04-26 12:23:05.594940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190ea248 00:26:12.171 [2024-04-26 12:23:05.597089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.171 [2024-04-26 12:23:05.597129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:12.171 [2024-04-26 12:23:05.611273] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190eaab8 00:26:12.172 [2024-04-26 12:23:05.613458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.172 [2024-04-26 12:23:05.613493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:12.172 [2024-04-26 12:23:05.627556] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190eb328 00:26:12.172 [2024-04-26 12:23:05.629637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.172 [2024-04-26 12:23:05.629671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:12.441 [2024-04-26 12:23:05.643735] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190ebb98 
00:26:12.441 [2024-04-26 12:23:05.645797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.441 [2024-04-26 12:23:05.645830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:12.441 [2024-04-26 12:23:05.659967] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190ec408 00:26:12.441 [2024-04-26 12:23:05.662015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.441 [2024-04-26 12:23:05.662049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:12.441 [2024-04-26 12:23:05.676359] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190ecc78 00:26:12.441 [2024-04-26 12:23:05.678393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.441 [2024-04-26 12:23:05.678428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:12.441 [2024-04-26 12:23:05.692616] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190ed4e8 00:26:12.441 [2024-04-26 12:23:05.694621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.441 [2024-04-26 12:23:05.694655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:12.441 [2024-04-26 12:23:05.708814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190edd58 00:26:12.441 [2024-04-26 12:23:05.710804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.441 [2024-04-26 12:23:05.710837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:12.441 [2024-04-26 12:23:05.725028] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190ee5c8 00:26:12.441 [2024-04-26 12:23:05.726995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.441 [2024-04-26 12:23:05.727032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:12.441 [2024-04-26 12:23:05.741167] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190eee38 00:26:12.441 [2024-04-26 12:23:05.743098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.441 [2024-04-26 12:23:05.743133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:12.441 [2024-04-26 12:23:05.757381] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with 
pdu=0x2000190ef6a8 00:26:12.441 [2024-04-26 12:23:05.759312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.441 [2024-04-26 12:23:05.759348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:12.441 [2024-04-26 12:23:05.773582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190eff18 00:26:12.441 [2024-04-26 12:23:05.775491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.441 [2024-04-26 12:23:05.775528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:12.441 [2024-04-26 12:23:05.789823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f0788 00:26:12.441 [2024-04-26 12:23:05.791736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.441 [2024-04-26 12:23:05.791774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:12.442 [2024-04-26 12:23:05.806057] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f0ff8 00:26:12.442 [2024-04-26 12:23:05.807932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.442 [2024-04-26 12:23:05.807969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:12.442 [2024-04-26 12:23:05.822302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f1868 00:26:12.442 [2024-04-26 12:23:05.824153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.442 [2024-04-26 12:23:05.824197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:12.442 [2024-04-26 12:23:05.838533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f20d8 00:26:12.442 [2024-04-26 12:23:05.840375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.442 [2024-04-26 12:23:05.840413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:12.442 [2024-04-26 12:23:05.854808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f2948 00:26:12.442 [2024-04-26 12:23:05.856649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.442 [2024-04-26 12:23:05.856684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:12.442 [2024-04-26 12:23:05.871028] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23455d0) with pdu=0x2000190f31b8 00:26:12.442 [2024-04-26 12:23:05.872817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.442 [2024-04-26 12:23:05.872853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:12.442 [2024-04-26 12:23:05.887262] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f3a28 00:26:12.442 [2024-04-26 12:23:05.889042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.442 [2024-04-26 12:23:05.889079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:12.709 [2024-04-26 12:23:05.903462] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f4298 00:26:12.709 [2024-04-26 12:23:05.905221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.709 [2024-04-26 12:23:05.905260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:12.709 [2024-04-26 12:23:05.919730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f4b08 00:26:12.709 [2024-04-26 12:23:05.921450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.709 [2024-04-26 12:23:05.921487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:12.709 [2024-04-26 12:23:05.936003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f5378 00:26:12.709 [2024-04-26 12:23:05.937706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.709 [2024-04-26 12:23:05.937741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:12.709 [2024-04-26 12:23:05.952210] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f5be8 00:26:12.709 [2024-04-26 12:23:05.953875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.709 [2024-04-26 12:23:05.953911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:12.709 [2024-04-26 12:23:05.968412] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f6458 00:26:12.709 [2024-04-26 12:23:05.970049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.709 [2024-04-26 12:23:05.970084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:12.709 [2024-04-26 12:23:05.984582] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f6cc8 00:26:12.709 [2024-04-26 12:23:05.986212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.709 [2024-04-26 12:23:05.986242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:12.709 [2024-04-26 12:23:06.000783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f7538 00:26:12.709 [2024-04-26 12:23:06.002393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.709 [2024-04-26 12:23:06.002427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:12.709 [2024-04-26 12:23:06.017006] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f7da8 00:26:12.709 [2024-04-26 12:23:06.018594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.709 [2024-04-26 12:23:06.018627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:12.709 [2024-04-26 12:23:06.033195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f8618 00:26:12.709 [2024-04-26 12:23:06.034749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.709 [2024-04-26 12:23:06.034784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:12.709 [2024-04-26 12:23:06.049344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f8e88 00:26:12.709 [2024-04-26 12:23:06.050880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.709 [2024-04-26 12:23:06.050916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:12.709 [2024-04-26 12:23:06.065564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f96f8 00:26:12.709 [2024-04-26 12:23:06.067093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.709 [2024-04-26 12:23:06.067129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:12.709 [2024-04-26 12:23:06.081769] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190f9f68 00:26:12.709 [2024-04-26 12:23:06.083270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.709 [2024-04-26 12:23:06.083305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:12.709 [2024-04-26 12:23:06.097970] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190fa7d8
00:26:12.709 [2024-04-26 12:23:06.099470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:12.709 [2024-04-26 12:23:06.099509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:12.710 [2024-04-26 12:23:06.114343] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190fb048
00:26:12.710 [2024-04-26 12:23:06.115842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:12.710 [2024-04-26 12:23:06.115906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:12.710 [2024-04-26 12:23:06.131024] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190fb8b8
00:26:12.710 [2024-04-26 12:23:06.132484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:12.710 [2024-04-26 12:23:06.132525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:26:12.710 [2024-04-26 12:23:06.147310] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23455d0) with pdu=0x2000190fc128
00:26:12.710 [2024-04-26 12:23:06.148733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:12.710 [2024-04-26 12:23:06.148772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:26:12.710
00:26:12.710 Latency(us)
00:26:12.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:12.710 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:12.710 nvme0n1 : 2.01 15521.04 60.63 0.00 0.00 8240.20 2412.92 31457.28
00:26:12.710 ===================================================================================================================
00:26:12.710 Total : 15521.04 60.63 0.00 0.00 8240.20 2412.92 31457.28
00:26:12.710 0
00:26:12.976 12:23:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:12.976 12:23:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:12.976 12:23:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:12.976 12:23:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:12.976 | .driver_specific
00:26:12.976 | .nvme_error
00:26:12.976 | .status_code
00:26:12.976 | .command_transient_transport_error'
00:26:13.244 12:23:06 -- host/digest.sh@71 -- # (( 122 > 0 ))
00:26:13.244 12:23:06 -- host/digest.sh@73 -- # killprocess 76785
00:26:13.244 12:23:06 -- common/autotest_common.sh@936 -- # '[' -z 76785 ']'
00:26:13.244 12:23:06 -- common/autotest_common.sh@940 -- # kill -0 76785
00:26:13.244 12:23:06 -- common/autotest_common.sh@941 -- # uname
00:26:13.244 12:23:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:13.244 12:23:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76785
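
The get_transient_errcount step traced above reduces to a single bdev_get_iostat RPC plus a jq filter over the bdev's driver-specific NVMe error counters. Below is a minimal stand-alone sketch of the same query, reusing the socket path, bdev name, and jq expression from this trace; all of these values are specific to this run, not general defaults.

#!/usr/bin/env bash
# Sketch only: re-issue the iostat query that host/digest.sh traces above and
# pull out the transient transport error counter. Paths, socket, and bdev name
# are copied from this log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock
BDEV=nvme0n1

errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b "$BDEV" | jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error')

# Mirror the "(( 122 > 0 ))" check traced above: the run counts as good when
# at least one transient transport error was recorded.
(( errcount > 0 )) && echo "observed $errcount transient transport errors on $BDEV"
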
killing process with pid 76785
Received shutdown signal, test time was about 2.000000 seconds
00:26:13.244
00:26:13.244 Latency(us)
00:26:13.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:13.244 ===================================================================================================================
00:26:13.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:13.244 12:23:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:26:13.244 12:23:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:26:13.244 12:23:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76785'
00:26:13.244 12:23:06 -- common/autotest_common.sh@955 -- # kill 76785
00:26:13.244 12:23:06 -- common/autotest_common.sh@960 -- # wait 76785
00:26:13.513 12:23:06 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:26:13.513 12:23:06 -- host/digest.sh@54 -- # local rw bs qd
00:26:13.513 12:23:06 -- host/digest.sh@56 -- # rw=randwrite
00:26:13.513 12:23:06 -- host/digest.sh@56 -- # bs=131072
00:26:13.513 12:23:06 -- host/digest.sh@56 -- # qd=16
00:26:13.513 12:23:06 -- host/digest.sh@58 -- # bperfpid=76844
00:26:13.513 12:23:06 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:13.513 12:23:06 -- host/digest.sh@60 -- # waitforlisten 76844 /var/tmp/bperf.sock
00:26:13.513 12:23:06 -- common/autotest_common.sh@817 -- # '[' -z 76844 ']'
00:26:13.513 12:23:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:13.513 12:23:06 -- common/autotest_common.sh@822 -- # local max_retries=100
00:26:13.513 12:23:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:13.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:13.513 12:23:06 -- common/autotest_common.sh@826 -- # xtrace_disable
00:26:13.513 12:23:06 -- common/autotest_common.sh@10 -- # set +x
00:26:13.513 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:13.513 Zero copy mechanism will not be used.
[2024-04-26 12:23:06.783934] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization...
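
The run_bperf_err step above starts a second bdevperf instance on its own RPC socket in wait-for-tests mode (-z) and then blocks until that socket is up. A rough equivalent of that launch follows, with the arguments copied from the trace; the polling loop is only an illustration and is not the suite's waitforlisten helper.

#!/usr/bin/env bash
# Sketch: launch bdevperf on a private RPC socket, as traced above, then poll
# until the socket answers RPCs. Paths and arguments come from this log; the
# wait loop is an assumption, not SPDK code.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bperf.sock

"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# rpc.py exits non-zero until the application is listening on $SOCK.
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods > /dev/null 2>&1; do
    sleep 0.1
done
echo "bdevperf pid $bperfpid is listening on $SOCK"
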
00:26:13.513 [2024-04-26 12:23:06.784050] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76844 ]
00:26:13.513 [2024-04-26 12:23:06.921634] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:13.779 [2024-04-26 12:23:07.038658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:14.349 12:23:07 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:26:14.349 12:23:07 -- common/autotest_common.sh@850 -- # return 0
00:26:14.349 12:23:07 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:14.349 12:23:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:14.606 12:23:08 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:14.606 12:23:08 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:14.606 12:23:08 -- common/autotest_common.sh@10 -- # set +x
00:26:14.606 12:23:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:14.607 12:23:08 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:14.607 12:23:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:14.865 nvme0n1
00:26:14.865 12:23:08 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:14.865 12:23:08 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:14.865 12:23:08 -- common/autotest_common.sh@10 -- # set +x
00:26:14.865 12:23:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:14.865 12:23:08 -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:14.865 12:23:08 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:15.132 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:15.132 Zero copy mechanism will not be used.
00:26:15.132 Running I/O for 2 seconds...
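
Stripped of the xtrace noise, the setup traced above comes down to a handful of RPC calls before the two-second run. A condensed sketch follows, with the commands copied from this trace; plain rpc.py without -s stands in for the suite's rpc_cmd helper and assumes the injection target is whatever application owns the default RPC socket, which this excerpt does not show.

#!/usr/bin/env bash
# Sketch of the traced sequence: enable NVMe error statistics with unlimited
# bdev retries, reset CRC32C error injection, attach the controller with data
# digest enabled (--ddgst) over TCP, arm CRC32C corruption (-i 32, interval
# value taken from the trace), then drive the run via bdevperf.py.
SPDK=/home/vagrant/spdk_repo/spdk
BPERF=/var/tmp/bperf.sock

"$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
"$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF" perform_tests
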
00:26:15.132 [2024-04-26 12:23:08.459613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.132 [2024-04-26 12:23:08.459935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.132 [2024-04-26 12:23:08.459965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.132 [2024-04-26 12:23:08.464868] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.132 [2024-04-26 12:23:08.465165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.132 [2024-04-26 12:23:08.465208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.132 [2024-04-26 12:23:08.470067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.132 [2024-04-26 12:23:08.470387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.132 [2024-04-26 12:23:08.470422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.132 [2024-04-26 12:23:08.475290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.132 [2024-04-26 12:23:08.475600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.132 [2024-04-26 12:23:08.475633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.132 [2024-04-26 12:23:08.480463] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.132 [2024-04-26 12:23:08.480761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.132 [2024-04-26 12:23:08.480794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.132 [2024-04-26 12:23:08.485653] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.132 [2024-04-26 12:23:08.485954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.132 [2024-04-26 12:23:08.485995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.132 [2024-04-26 12:23:08.490878] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.132 [2024-04-26 12:23:08.491188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.132 [2024-04-26 12:23:08.491220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.132 [2024-04-26 12:23:08.496185] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.132 [2024-04-26 12:23:08.496484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.132 [2024-04-26 12:23:08.496516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.132 [2024-04-26 12:23:08.501393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.132 [2024-04-26 12:23:08.501691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.132 [2024-04-26 12:23:08.501723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.132 [2024-04-26 12:23:08.506590] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.132 [2024-04-26 12:23:08.506889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.132 [2024-04-26 12:23:08.506931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.132 [2024-04-26 12:23:08.511834] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.132 [2024-04-26 12:23:08.512132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.132 [2024-04-26 12:23:08.512164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.132 [2024-04-26 12:23:08.517049] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.132 [2024-04-26 12:23:08.517362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.132 [2024-04-26 12:23:08.517394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.132 [2024-04-26 12:23:08.522284] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.132 [2024-04-26 12:23:08.522585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.132 [2024-04-26 12:23:08.522619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.132 [2024-04-26 12:23:08.527586] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.132 [2024-04-26 12:23:08.527890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.132 [2024-04-26 12:23:08.527923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.132 [2024-04-26 12:23:08.532818] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.132 [2024-04-26 12:23:08.533124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.132 [2024-04-26 12:23:08.533184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.132 [2024-04-26 12:23:08.538082] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.132 [2024-04-26 12:23:08.538396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.132 [2024-04-26 12:23:08.538429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.132 [2024-04-26 12:23:08.543293] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.132 [2024-04-26 12:23:08.543603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.132 [2024-04-26 12:23:08.543646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.132 [2024-04-26 12:23:08.548535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.132 [2024-04-26 12:23:08.548839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.133 [2024-04-26 12:23:08.548871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.133 [2024-04-26 12:23:08.553790] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.133 [2024-04-26 12:23:08.554090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.133 [2024-04-26 12:23:08.554123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.133 [2024-04-26 12:23:08.559003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.133 [2024-04-26 12:23:08.559317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.133 [2024-04-26 12:23:08.559350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.133 [2024-04-26 12:23:08.564225] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.133 [2024-04-26 12:23:08.564522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.133 [2024-04-26 12:23:08.564554] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.133 [2024-04-26 12:23:08.569441] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.133 [2024-04-26 12:23:08.569741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.133 [2024-04-26 12:23:08.569775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.133 [2024-04-26 12:23:08.574628] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.133 [2024-04-26 12:23:08.574929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.133 [2024-04-26 12:23:08.574962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.133 [2024-04-26 12:23:08.579859] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.133 [2024-04-26 12:23:08.580161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.133 [2024-04-26 12:23:08.580207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.133 [2024-04-26 12:23:08.585093] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.133 [2024-04-26 12:23:08.585406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.133 [2024-04-26 12:23:08.585438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.133 [2024-04-26 12:23:08.590259] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.133 [2024-04-26 12:23:08.590556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.133 [2024-04-26 12:23:08.590589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.409 [2024-04-26 12:23:08.595467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.409 [2024-04-26 12:23:08.595764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.409 [2024-04-26 12:23:08.595798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.409 [2024-04-26 12:23:08.600673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.409 [2024-04-26 12:23:08.600972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.409 
[2024-04-26 12:23:08.601006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.409 [2024-04-26 12:23:08.605946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.409 [2024-04-26 12:23:08.606267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.409 [2024-04-26 12:23:08.606301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.409 [2024-04-26 12:23:08.611182] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.409 [2024-04-26 12:23:08.611490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.409 [2024-04-26 12:23:08.611518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.409 [2024-04-26 12:23:08.616387] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.409 [2024-04-26 12:23:08.616682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.409 [2024-04-26 12:23:08.616714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.409 [2024-04-26 12:23:08.621656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.409 [2024-04-26 12:23:08.621949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.409 [2024-04-26 12:23:08.621984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.409 [2024-04-26 12:23:08.626639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.409 [2024-04-26 12:23:08.626715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.409 [2024-04-26 12:23:08.626738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.409 [2024-04-26 12:23:08.631822] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.409 [2024-04-26 12:23:08.631896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.409 [2024-04-26 12:23:08.631920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.409 [2024-04-26 12:23:08.636982] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.409 [2024-04-26 12:23:08.637055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:15.409 [2024-04-26 12:23:08.637079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.409 [2024-04-26 12:23:08.642191] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.409 [2024-04-26 12:23:08.642264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.409 [2024-04-26 12:23:08.642288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.409 [2024-04-26 12:23:08.647415] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.409 [2024-04-26 12:23:08.647494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.409 [2024-04-26 12:23:08.647517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.409 [2024-04-26 12:23:08.652598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.409 [2024-04-26 12:23:08.652671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.409 [2024-04-26 12:23:08.652694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.409 [2024-04-26 12:23:08.657812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.409 [2024-04-26 12:23:08.657888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.409 [2024-04-26 12:23:08.657910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.409 [2024-04-26 12:23:08.662959] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.409 [2024-04-26 12:23:08.663032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.409 [2024-04-26 12:23:08.663054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.409 [2024-04-26 12:23:08.668213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.409 [2024-04-26 12:23:08.668287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.409 [2024-04-26 12:23:08.668310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.409 [2024-04-26 12:23:08.673351] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.409 [2024-04-26 12:23:08.673423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.409 [2024-04-26 12:23:08.673446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.409 [2024-04-26 12:23:08.678536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.678612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.678634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.683725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.683798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.683821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.688911] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.688988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.689010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.694056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.694136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.694160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.699196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.699271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.699294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.704424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.704499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.704531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.709590] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.709666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.709687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.714737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.714817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.714839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.719943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.720017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.720039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.725089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.725163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.725199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.730209] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.730285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.730308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.735337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.735409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.735432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.740530] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.740605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.740628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.745666] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.745746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.745775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.750821] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.750896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.750919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.755960] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.756032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.756055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.761119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.761211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.761234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.766288] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.766363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.766385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.771450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.771542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.771565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.776564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.776644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.776667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.781699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 
12:23:08.781772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.781795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.786841] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.786917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.786939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.410 [2024-04-26 12:23:08.792016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.410 [2024-04-26 12:23:08.792093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.410 [2024-04-26 12:23:08.792116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.411 [2024-04-26 12:23:08.797187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.411 [2024-04-26 12:23:08.797259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.411 [2024-04-26 12:23:08.797281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.411 [2024-04-26 12:23:08.802342] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.411 [2024-04-26 12:23:08.802414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.411 [2024-04-26 12:23:08.802437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.411 [2024-04-26 12:23:08.807510] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.411 [2024-04-26 12:23:08.807589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.411 [2024-04-26 12:23:08.807611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.411 [2024-04-26 12:23:08.812681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.411 [2024-04-26 12:23:08.812758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.411 [2024-04-26 12:23:08.812780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.411 [2024-04-26 12:23:08.817785] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with 
pdu=0x2000190fef90 00:26:15.411 [2024-04-26 12:23:08.817858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.411 [2024-04-26 12:23:08.817880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.411 [2024-04-26 12:23:08.822914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.411 [2024-04-26 12:23:08.822986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.411 [2024-04-26 12:23:08.823009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.411 [2024-04-26 12:23:08.828062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.411 [2024-04-26 12:23:08.828134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.411 [2024-04-26 12:23:08.828156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.411 [2024-04-26 12:23:08.833189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.411 [2024-04-26 12:23:08.833258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.411 [2024-04-26 12:23:08.833281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.411 [2024-04-26 12:23:08.838334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.411 [2024-04-26 12:23:08.838407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.411 [2024-04-26 12:23:08.838429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.411 [2024-04-26 12:23:08.843500] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.411 [2024-04-26 12:23:08.843571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.411 [2024-04-26 12:23:08.843594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.411 [2024-04-26 12:23:08.848661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.411 [2024-04-26 12:23:08.848737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.411 [2024-04-26 12:23:08.848759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.411 [2024-04-26 12:23:08.853797] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.411 [2024-04-26 12:23:08.853871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.411 [2024-04-26 12:23:08.853900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.411 [2024-04-26 12:23:08.858924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.684 [2024-04-26 12:23:08.858995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.684 [2024-04-26 12:23:08.859018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.684 [2024-04-26 12:23:08.864091] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.684 [2024-04-26 12:23:08.864160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.684 [2024-04-26 12:23:08.864197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.684 [2024-04-26 12:23:08.869215] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.684 [2024-04-26 12:23:08.869289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.684 [2024-04-26 12:23:08.869312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.684 [2024-04-26 12:23:08.874366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.684 [2024-04-26 12:23:08.874436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.684 [2024-04-26 12:23:08.874458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.684 [2024-04-26 12:23:08.879598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.684 [2024-04-26 12:23:08.879668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.684 [2024-04-26 12:23:08.879691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.684 [2024-04-26 12:23:08.884710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.684 [2024-04-26 12:23:08.884786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.684 [2024-04-26 12:23:08.884808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.684 [2024-04-26 12:23:08.889878] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.684 [2024-04-26 12:23:08.889946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.684 [2024-04-26 12:23:08.889970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.684 [2024-04-26 12:23:08.895022] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.684 [2024-04-26 12:23:08.895094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.684 [2024-04-26 12:23:08.895118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.684 [2024-04-26 12:23:08.900262] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.684 [2024-04-26 12:23:08.900336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.684 [2024-04-26 12:23:08.900358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.684 [2024-04-26 12:23:08.905383] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.684 [2024-04-26 12:23:08.905456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.684 [2024-04-26 12:23:08.905479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.684 [2024-04-26 12:23:08.910541] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.684 [2024-04-26 12:23:08.910617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.684 [2024-04-26 12:23:08.910639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.684 [2024-04-26 12:23:08.915692] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.684 [2024-04-26 12:23:08.915764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.684 [2024-04-26 12:23:08.915787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.684 [2024-04-26 12:23:08.920882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.684 [2024-04-26 12:23:08.920993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.684 [2024-04-26 12:23:08.921028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.685 
[2024-04-26 12:23:08.926081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:08.926154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:08.926204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:08.931346] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:08.931432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:08.931469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:08.936539] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:08.936615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:08.936639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:08.941733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:08.941817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:08.941841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:08.946913] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:08.946983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:08.947006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:08.952184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:08.952257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:08.952280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:08.957444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:08.957519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:08.957542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:08.962645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:08.962718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:08.962740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:08.967827] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:08.967904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:08.967925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:08.973035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:08.973107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:08.973129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:08.978247] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:08.978324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:08.978346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:08.983475] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:08.983550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:08.983574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:08.988691] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:08.988769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:08.988793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:08.993912] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:08.993988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:08.994011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:08.999081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:08.999154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:08.999191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:09.004306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:09.004382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:09.004404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:09.009452] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:09.009525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:09.009546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:09.014577] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:09.014654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:09.014676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:09.019772] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:09.019848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:09.019870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:09.025002] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:09.025084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:09.025106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:09.030142] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:09.030237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:09.030259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:09.035333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:09.035414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:09.035437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:09.040566] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:09.040643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:09.040666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:09.045715] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:09.045791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:09.045812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:09.050937] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:09.051016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:09.051043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:09.056157] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:09.056242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:09.056265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.685 [2024-04-26 12:23:09.061334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.685 [2024-04-26 12:23:09.061406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.685 [2024-04-26 12:23:09.061429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.686 [2024-04-26 12:23:09.066504] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.686 [2024-04-26 12:23:09.066574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.686 [2024-04-26 12:23:09.066596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.686 [2024-04-26 12:23:09.071655] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.686 [2024-04-26 12:23:09.071736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.686 [2024-04-26 12:23:09.071758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.686 [2024-04-26 12:23:09.076841] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.686 [2024-04-26 12:23:09.076913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.686 [2024-04-26 12:23:09.076935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.686 [2024-04-26 12:23:09.081953] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.686 [2024-04-26 12:23:09.082026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.686 [2024-04-26 12:23:09.082049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.686 [2024-04-26 12:23:09.087116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.686 [2024-04-26 12:23:09.087219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.686 [2024-04-26 12:23:09.087242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.686 [2024-04-26 12:23:09.092288] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.686 [2024-04-26 12:23:09.092362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.686 [2024-04-26 12:23:09.092385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.686 [2024-04-26 12:23:09.097452] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.686 [2024-04-26 12:23:09.097523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.686 [2024-04-26 12:23:09.097546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.686 [2024-04-26 12:23:09.102637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.686 [2024-04-26 12:23:09.102710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.686 [2024-04-26 
12:23:09.102733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.686 [2024-04-26 12:23:09.107787] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.686 [2024-04-26 12:23:09.107861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.686 [2024-04-26 12:23:09.107885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.686 [2024-04-26 12:23:09.112902] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.686 [2024-04-26 12:23:09.112981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.686 [2024-04-26 12:23:09.113003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.686 [2024-04-26 12:23:09.118001] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.686 [2024-04-26 12:23:09.118076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.686 [2024-04-26 12:23:09.118098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.686 [2024-04-26 12:23:09.123334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.686 [2024-04-26 12:23:09.123412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.686 [2024-04-26 12:23:09.123435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.686 [2024-04-26 12:23:09.128477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.686 [2024-04-26 12:23:09.128549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.686 [2024-04-26 12:23:09.128571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.686 [2024-04-26 12:23:09.133638] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.686 [2024-04-26 12:23:09.133712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.686 [2024-04-26 12:23:09.133733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.686 [2024-04-26 12:23:09.138816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.686 [2024-04-26 12:23:09.138893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:15.686 [2024-04-26 12:23:09.138914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.957 [2024-04-26 12:23:09.143994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.957 [2024-04-26 12:23:09.144066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-04-26 12:23:09.144088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.957 [2024-04-26 12:23:09.149133] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.957 [2024-04-26 12:23:09.149219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-04-26 12:23:09.149241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.957 [2024-04-26 12:23:09.154489] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.957 [2024-04-26 12:23:09.154578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-04-26 12:23:09.154601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.957 [2024-04-26 12:23:09.159714] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.957 [2024-04-26 12:23:09.159790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-04-26 12:23:09.159814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.957 [2024-04-26 12:23:09.165046] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.957 [2024-04-26 12:23:09.165118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-04-26 12:23:09.165142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.957 [2024-04-26 12:23:09.170186] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.957 [2024-04-26 12:23:09.170280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-04-26 12:23:09.170303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.957 [2024-04-26 12:23:09.175387] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.957 [2024-04-26 12:23:09.175455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-04-26 12:23:09.175493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.957 [2024-04-26 12:23:09.180573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.957 [2024-04-26 12:23:09.180648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-04-26 12:23:09.180671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.957 [2024-04-26 12:23:09.185772] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.957 [2024-04-26 12:23:09.185847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-04-26 12:23:09.185871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.957 [2024-04-26 12:23:09.190923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.957 [2024-04-26 12:23:09.190996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-04-26 12:23:09.191019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.957 [2024-04-26 12:23:09.196162] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.957 [2024-04-26 12:23:09.196242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-04-26 12:23:09.196265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.957 [2024-04-26 12:23:09.201331] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.957 [2024-04-26 12:23:09.201404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-04-26 12:23:09.201428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.957 [2024-04-26 12:23:09.206496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.957 [2024-04-26 12:23:09.206567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-04-26 12:23:09.206589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.957 [2024-04-26 12:23:09.211672] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.957 [2024-04-26 12:23:09.211747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-04-26 12:23:09.211769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.957 [2024-04-26 12:23:09.216926] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.957 [2024-04-26 12:23:09.217003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-04-26 12:23:09.217025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.957 [2024-04-26 12:23:09.222081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.957 [2024-04-26 12:23:09.222152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-04-26 12:23:09.222188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.227269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.227343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.227367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.232455] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.232527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.232550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.237589] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.237662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.237684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.242719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.242790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.242811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.247904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.247982] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.248004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.253087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.253163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.253198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.258218] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.258296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.258319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.263344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.263419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.263441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.268465] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.268535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.268558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.273640] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.273714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.273736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.278752] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.278827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.278850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.283934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.284005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.284028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.289051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.289121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.289143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.294240] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.294316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.294338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.299366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.299438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.299474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.304513] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.304589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.304611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.309669] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.309740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.309763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.314752] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.314825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.314848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.319900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 
12:23:09.319977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.320000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.325077] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.325149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.325185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.330291] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.330369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.330392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.335498] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.335574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.335597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.340653] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.340727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.340749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.345787] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.345862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.345885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.350893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.350967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.350989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.356080] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 
00:26:15.958 [2024-04-26 12:23:09.356155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.356191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.361242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.958 [2024-04-26 12:23:09.361315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-04-26 12:23:09.361337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.958 [2024-04-26 12:23:09.366359] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.959 [2024-04-26 12:23:09.366429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.959 [2024-04-26 12:23:09.366451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.959 [2024-04-26 12:23:09.371503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.959 [2024-04-26 12:23:09.371576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.959 [2024-04-26 12:23:09.371598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.959 [2024-04-26 12:23:09.376642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.959 [2024-04-26 12:23:09.376715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.959 [2024-04-26 12:23:09.376738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.959 [2024-04-26 12:23:09.381779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.959 [2024-04-26 12:23:09.381855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.959 [2024-04-26 12:23:09.381877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.959 [2024-04-26 12:23:09.386931] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.959 [2024-04-26 12:23:09.387002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.959 [2024-04-26 12:23:09.387023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.959 [2024-04-26 12:23:09.392077] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.959 [2024-04-26 12:23:09.392148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.959 [2024-04-26 12:23:09.392183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.959 [2024-04-26 12:23:09.397208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.959 [2024-04-26 12:23:09.397283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.959 [2024-04-26 12:23:09.397304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.959 [2024-04-26 12:23:09.402306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.959 [2024-04-26 12:23:09.402380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.959 [2024-04-26 12:23:09.402402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.959 [2024-04-26 12:23:09.407494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.959 [2024-04-26 12:23:09.407564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.959 [2024-04-26 12:23:09.407586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.959 [2024-04-26 12:23:09.412626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:15.959 [2024-04-26 12:23:09.412699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.959 [2024-04-26 12:23:09.412721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.226 [2024-04-26 12:23:09.417770] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.226 [2024-04-26 12:23:09.417846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.226 [2024-04-26 12:23:09.417868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.226 [2024-04-26 12:23:09.422899] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.226 [2024-04-26 12:23:09.422973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.226 [2024-04-26 12:23:09.422995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.226 [2024-04-26 12:23:09.428055] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.226 [2024-04-26 12:23:09.428129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.226 [2024-04-26 12:23:09.428151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.226 [2024-04-26 12:23:09.433217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.226 [2024-04-26 12:23:09.433294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.226 [2024-04-26 12:23:09.433316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.226 [2024-04-26 12:23:09.438395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.226 [2024-04-26 12:23:09.438478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.226 [2024-04-26 12:23:09.438501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.226 [2024-04-26 12:23:09.443589] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.226 [2024-04-26 12:23:09.443662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.226 [2024-04-26 12:23:09.443685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.226 [2024-04-26 12:23:09.448762] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.226 [2024-04-26 12:23:09.448835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.226 [2024-04-26 12:23:09.448858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.226 [2024-04-26 12:23:09.453922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.226 [2024-04-26 12:23:09.453997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.226 [2024-04-26 12:23:09.454020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.226 [2024-04-26 12:23:09.459102] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.226 [2024-04-26 12:23:09.459186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.226 [2024-04-26 12:23:09.459210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.226 [2024-04-26 12:23:09.464323] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.226 [2024-04-26 12:23:09.464396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.226 [2024-04-26 12:23:09.464418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.226 [2024-04-26 12:23:09.469465] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.226 [2024-04-26 12:23:09.469536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.226 [2024-04-26 12:23:09.469557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.226 [2024-04-26 12:23:09.474612] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.226 [2024-04-26 12:23:09.474693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.226 [2024-04-26 12:23:09.474715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.226 [2024-04-26 12:23:09.479755] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.226 [2024-04-26 12:23:09.479823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.226 [2024-04-26 12:23:09.479850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.226 [2024-04-26 12:23:09.484953] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.226 [2024-04-26 12:23:09.485024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.226 [2024-04-26 12:23:09.485046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.490069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.490141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.490182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.495253] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.495329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.495351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.227 
[2024-04-26 12:23:09.500386] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.500460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.500482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.505539] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.505614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.505635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.510699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.510772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.510793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.515856] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.515932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.515953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.521034] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.521107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.521128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.526150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.526238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.526260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.531327] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.531397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.531419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.536469] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.536539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.536562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.541616] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.541689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.541711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.546779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.546858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.546879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.551915] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.551987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.552008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.557010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.557087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.557108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.562182] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.562253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.562276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.567343] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.567413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.567435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.572506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.572582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.572604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.577688] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.577763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.577784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.582886] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.582972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.582996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.588083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.588161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.588198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.593283] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.593361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.593384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.598444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.598521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.598545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.603597] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.603676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.603701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.608796] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.608877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.608900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.613958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.614037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.614060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.619156] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.227 [2024-04-26 12:23:09.619245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.227 [2024-04-26 12:23:09.619268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.227 [2024-04-26 12:23:09.624350] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.228 [2024-04-26 12:23:09.624433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.228 [2024-04-26 12:23:09.624456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.228 [2024-04-26 12:23:09.629522] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.228 [2024-04-26 12:23:09.629599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.228 [2024-04-26 12:23:09.629621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.228 [2024-04-26 12:23:09.634710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.228 [2024-04-26 12:23:09.634785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.228 [2024-04-26 12:23:09.634808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.228 [2024-04-26 12:23:09.639851] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.228 [2024-04-26 12:23:09.639928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.228 [2024-04-26 12:23:09.639951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.228 [2024-04-26 12:23:09.645033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.228 [2024-04-26 12:23:09.645105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.228 [2024-04-26 12:23:09.645129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.228 [2024-04-26 12:23:09.650219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.228 [2024-04-26 12:23:09.650295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.228 [2024-04-26 12:23:09.650317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.228 [2024-04-26 12:23:09.655425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.228 [2024-04-26 12:23:09.655505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.228 [2024-04-26 12:23:09.655527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.228 [2024-04-26 12:23:09.660625] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.228 [2024-04-26 12:23:09.660696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.228 [2024-04-26 12:23:09.660717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.228 [2024-04-26 12:23:09.665846] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.228 [2024-04-26 12:23:09.665916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.228 [2024-04-26 12:23:09.665938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.228 [2024-04-26 12:23:09.671044] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.228 [2024-04-26 12:23:09.671123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.228 [2024-04-26 12:23:09.671145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.228 [2024-04-26 12:23:09.676297] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.228 [2024-04-26 12:23:09.676370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.228 [2024-04-26 
12:23:09.676393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.228 [2024-04-26 12:23:09.681544] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.228 [2024-04-26 12:23:09.681632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.228 [2024-04-26 12:23:09.681654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.496 [2024-04-26 12:23:09.686768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.496 [2024-04-26 12:23:09.686844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.496 [2024-04-26 12:23:09.686868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.496 [2024-04-26 12:23:09.692050] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.496 [2024-04-26 12:23:09.692130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.496 [2024-04-26 12:23:09.692152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.496 [2024-04-26 12:23:09.697307] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.496 [2024-04-26 12:23:09.697384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.496 [2024-04-26 12:23:09.697406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.496 [2024-04-26 12:23:09.702482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.496 [2024-04-26 12:23:09.702553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.496 [2024-04-26 12:23:09.702576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.496 [2024-04-26 12:23:09.707640] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.496 [2024-04-26 12:23:09.707717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.496 [2024-04-26 12:23:09.707739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.496 [2024-04-26 12:23:09.712812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.496 [2024-04-26 12:23:09.712888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:16.496 [2024-04-26 12:23:09.712910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.496 [2024-04-26 12:23:09.718018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.496 [2024-04-26 12:23:09.718094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.496 [2024-04-26 12:23:09.718116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.496 [2024-04-26 12:23:09.723210] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.496 [2024-04-26 12:23:09.723285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.496 [2024-04-26 12:23:09.723307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.496 [2024-04-26 12:23:09.728390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.496 [2024-04-26 12:23:09.728465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.496 [2024-04-26 12:23:09.728487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.496 [2024-04-26 12:23:09.733512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.496 [2024-04-26 12:23:09.733588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.496 [2024-04-26 12:23:09.733610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.496 [2024-04-26 12:23:09.738661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.496 [2024-04-26 12:23:09.738734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.496 [2024-04-26 12:23:09.738757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.496 [2024-04-26 12:23:09.743861] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.496 [2024-04-26 12:23:09.743938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.496 [2024-04-26 12:23:09.743961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.496 [2024-04-26 12:23:09.749049] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.749118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.749140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.754233] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.754307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.754329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.759395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.759474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.759497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.764533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.764603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.764624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.769726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.769797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.769819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.774910] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.774979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.775000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.780086] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.780155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.780189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.785268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.785336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.785357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.790456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.790528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.790549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.795668] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.795741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.795762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.800854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.800924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.800947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.806053] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.806124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.806145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.811303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.811372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.811394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.816515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.816589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.816611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.821698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.821769] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.821790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.826884] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.826962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.826985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.832160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.832249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.832271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.837375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.837454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.837476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.842624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.842696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.842718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.847785] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.847858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.847882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.852961] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.853031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.853053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.858143] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.858232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.858254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.863346] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.863419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.863440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.868547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.868616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.868637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.873732] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.873802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.873823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.878933] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.879009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.879030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.884079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.497 [2024-04-26 12:23:09.884147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.497 [2024-04-26 12:23:09.884183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.497 [2024-04-26 12:23:09.889275] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.498 [2024-04-26 12:23:09.889345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.498 [2024-04-26 12:23:09.889367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.498 [2024-04-26 12:23:09.894411] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.498 [2024-04-26 
12:23:09.894485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.498 [2024-04-26 12:23:09.894506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.498 [2024-04-26 12:23:09.899614] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.498 [2024-04-26 12:23:09.899683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.498 [2024-04-26 12:23:09.899705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.498 [2024-04-26 12:23:09.904801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.498 [2024-04-26 12:23:09.904876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.498 [2024-04-26 12:23:09.904898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.498 [2024-04-26 12:23:09.909981] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.498 [2024-04-26 12:23:09.910053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.498 [2024-04-26 12:23:09.910074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.498 [2024-04-26 12:23:09.915121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.498 [2024-04-26 12:23:09.915208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.498 [2024-04-26 12:23:09.915230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.498 [2024-04-26 12:23:09.920330] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.498 [2024-04-26 12:23:09.920403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.498 [2024-04-26 12:23:09.920427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.498 [2024-04-26 12:23:09.925499] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.498 [2024-04-26 12:23:09.925577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.498 [2024-04-26 12:23:09.925600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.498 [2024-04-26 12:23:09.930729] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with 
pdu=0x2000190fef90 00:26:16.498 [2024-04-26 12:23:09.930812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.498 [2024-04-26 12:23:09.930836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.498 [2024-04-26 12:23:09.935962] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.498 [2024-04-26 12:23:09.936044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.498 [2024-04-26 12:23:09.936068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.498 [2024-04-26 12:23:09.941192] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.498 [2024-04-26 12:23:09.941266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.498 [2024-04-26 12:23:09.941288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.498 [2024-04-26 12:23:09.946339] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.498 [2024-04-26 12:23:09.946418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.498 [2024-04-26 12:23:09.946440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.498 [2024-04-26 12:23:09.951569] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.498 [2024-04-26 12:23:09.951648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.498 [2024-04-26 12:23:09.951669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.766 [2024-04-26 12:23:09.956733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.766 [2024-04-26 12:23:09.956808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.766 [2024-04-26 12:23:09.956830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.766 [2024-04-26 12:23:09.961902] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.766 [2024-04-26 12:23:09.961973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.766 [2024-04-26 12:23:09.961995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.766 [2024-04-26 12:23:09.967078] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.766 [2024-04-26 12:23:09.967151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.766 [2024-04-26 12:23:09.967185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.766 [2024-04-26 12:23:09.972263] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.766 [2024-04-26 12:23:09.972338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.766 [2024-04-26 12:23:09.972360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.766 [2024-04-26 12:23:09.977498] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:09.977572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:09.977595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:09.982718] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:09.982793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:09.982815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:09.987896] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:09.987966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:09.987988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:09.993061] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:09.993133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:09.993165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:09.998279] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:09.998354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:09.998377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.003453] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.003535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.003558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.008624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.008699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.008721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.013806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.013881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.013903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.018991] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.019062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.019085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.024239] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.024315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.024337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.029428] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.029501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.029523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.034585] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.034661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.034683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.039821] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.039903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.039926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.045040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.045133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.045158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.050302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.050383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.050421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.055472] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.055541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.055566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.060651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.060725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.060748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.065855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.065934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.065957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.071054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.071129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.071151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.767 
[2024-04-26 12:23:10.076304] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.076379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.076401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.081533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.081608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.081631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.086800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.086876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.086908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.092112] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.092206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.092243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.097386] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.097474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.097501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.102694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.102788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.102815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.107955] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.108053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.108078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.113222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.767 [2024-04-26 12:23:10.113299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.767 [2024-04-26 12:23:10.113324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.767 [2024-04-26 12:23:10.118471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.118545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.118570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.123701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.123775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.123799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.128925] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.129006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.129031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.134106] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.134221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.134255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.139309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.139416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.139439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.144548] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.144628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.144652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.149757] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.149839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.149864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.154941] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.155039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.155063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.160233] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.160333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.160359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.165497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.165603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.165628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.170693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.170774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.170799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.175982] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.176065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.176091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.181278] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.181375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.181402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.186627] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.186720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.186745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.191853] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.191948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.191973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.197047] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.197137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.197162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.202314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.202389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.202413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.207576] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.207674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.207698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.212824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.212919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.212944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.218016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.218089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.218113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.768 [2024-04-26 12:23:10.223278] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:16.768 [2024-04-26 12:23:10.223348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.768 [2024-04-26 12:23:10.223373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.034 [2024-04-26 12:23:10.228496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.034 [2024-04-26 12:23:10.228576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.034 [2024-04-26 12:23:10.228600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.034 [2024-04-26 12:23:10.233705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.034 [2024-04-26 12:23:10.233785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.034 [2024-04-26 12:23:10.233809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.034 [2024-04-26 12:23:10.238862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.034 [2024-04-26 12:23:10.238931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.034 [2024-04-26 12:23:10.238955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.034 [2024-04-26 12:23:10.244058] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.034 [2024-04-26 12:23:10.244149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.034 [2024-04-26 12:23:10.244190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.034 [2024-04-26 12:23:10.249278] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.034 [2024-04-26 12:23:10.249362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.034 [2024-04-26 12:23:10.249387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.034 [2024-04-26 12:23:10.254395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.034 [2024-04-26 12:23:10.254475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.034 [2024-04-26 
12:23:10.254500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.034 [2024-04-26 12:23:10.259597] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.034 [2024-04-26 12:23:10.259671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.034 [2024-04-26 12:23:10.259697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.034 [2024-04-26 12:23:10.264806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.034 [2024-04-26 12:23:10.264898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.034 [2024-04-26 12:23:10.264924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.034 [2024-04-26 12:23:10.270019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.034 [2024-04-26 12:23:10.270091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.034 [2024-04-26 12:23:10.270115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.034 [2024-04-26 12:23:10.275222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.034 [2024-04-26 12:23:10.275314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.034 [2024-04-26 12:23:10.275340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.034 [2024-04-26 12:23:10.280386] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.034 [2024-04-26 12:23:10.280457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.034 [2024-04-26 12:23:10.280483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.034 [2024-04-26 12:23:10.285502] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.034 [2024-04-26 12:23:10.285574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.034 [2024-04-26 12:23:10.285600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.034 [2024-04-26 12:23:10.290695] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.034 [2024-04-26 12:23:10.290777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:17.034 [2024-04-26 12:23:10.290803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.034 [2024-04-26 12:23:10.295916] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.034 [2024-04-26 12:23:10.295986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.034 [2024-04-26 12:23:10.296010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.301089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.301192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.301216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.306338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.306416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.306440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.311495] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.311597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.311620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.316682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.316773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.316797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.321810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.321900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.321925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.326967] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.327057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.327081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.332186] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.332285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.332312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.337369] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.337447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.337474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.342587] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.342690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.342716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.347755] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.347847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.347872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.352956] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.353031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.353054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.358131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.358212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.358241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.363379] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.363453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.363489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.368722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.368811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.368836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.373890] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.373960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.373985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.378996] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.379065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.379089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.384224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.384313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.384336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.389375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.389458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.389482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.394567] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.394639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.394663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.399774] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.399880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.399906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.404914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.404982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.405006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.410060] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.410137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.410162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.415247] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.415316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.415340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.420395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.420477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.420500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.425587] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.425686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.425709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.430742] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.430815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.035 [2024-04-26 12:23:10.430839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.035 [2024-04-26 12:23:10.435985] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90 00:26:17.035 [2024-04-26 12:23:10.436061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.035 [2024-04-26 12:23:10.436086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.035 [2024-04-26 12:23:10.441195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90
00:26:17.036 [2024-04-26 12:23:10.441267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.036 [2024-04-26 12:23:10.441291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.036 [2024-04-26 12:23:10.446370] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2345770) with pdu=0x2000190fef90
00:26:17.036 [2024-04-26 12:23:10.446441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.036 [2024-04-26 12:23:10.446464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.036
00:26:17.036 Latency(us)
00:26:17.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:17.036 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:17.036 nvme0n1 : 2.00 5951.77 743.97 0.00 0.00 2682.56 2055.45 11319.85
00:26:17.036 ===================================================================================================================
00:26:17.036 Total : 5951.77 743.97 0.00 0.00 2682.56 2055.45 11319.85
00:26:17.036 0
00:26:17.036 12:23:10 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:17.036 12:23:10 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:17.036 | .driver_specific
00:26:17.036 | .nvme_error
00:26:17.036 | .status_code
00:26:17.036 | .command_transient_transport_error'
00:26:17.036 12:23:10 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:17.036 12:23:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:17.302 12:23:10 -- host/digest.sh@71 -- # (( 384 > 0 ))
00:26:17.302 12:23:10 -- host/digest.sh@73 -- # killprocess 76844
00:26:17.302 12:23:10 -- common/autotest_common.sh@936 -- # '[' -z 76844 ']'
00:26:17.302 12:23:10 -- common/autotest_common.sh@940 -- # kill -0 76844
00:26:17.302 12:23:10 -- common/autotest_common.sh@941 -- # uname
00:26:17.302 12:23:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:17.302 12:23:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76844
00:26:17.572 12:23:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:26:17.572 12:23:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:26:17.572 killing process with pid 76844 12:23:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76844' Received shutdown signal, test time was about 2.000000 seconds
00:26:17.572
00:26:17.572 Latency(us)
00:26:17.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:17.572 ===================================================================================================================
00:26:17.572 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
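The host/digest.sh trace above is where the test decides whether enough errors were observed: it queries the bdevperf application over its RPC socket for per-bdev I/O statistics and filters out the transient transport error counter with jq, which is the value behind the (( 384 > 0 )) check. A rough bash sketch of those two helpers, reconstructed from the xtrace lines above (the helpers in the actual SPDK test scripts may differ in detail):

bperf_rpc() {
    # Forward an SPDK RPC to the bdevperf instance listening on /var/tmp/bperf.sock.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
}

get_transient_errcount() {
    # Fetch I/O statistics for the named bdev and keep only the count of
    # COMMAND TRANSIENT TRANSPORT ERROR completions recorded by the NVMe driver.
    bperf_rpc bdev_get_iostat -b "$1" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

In this run get_transient_errcount nvme0n1 came back as 384, so the assertion in the trace passes.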
0.00 00:26:17.572 12:23:10 -- common/autotest_common.sh@955 -- # kill 76844 00:26:17.572 12:23:10 -- common/autotest_common.sh@960 -- # wait 76844 00:26:17.842 12:23:11 -- host/digest.sh@116 -- # killprocess 76631 00:26:17.842 12:23:11 -- common/autotest_common.sh@936 -- # '[' -z 76631 ']' 00:26:17.842 12:23:11 -- common/autotest_common.sh@940 -- # kill -0 76631 00:26:17.842 12:23:11 -- common/autotest_common.sh@941 -- # uname 00:26:17.842 12:23:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:17.842 12:23:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76631 00:26:17.842 12:23:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:17.842 12:23:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:17.842 killing process with pid 76631 00:26:17.842 12:23:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76631' 00:26:17.842 12:23:11 -- common/autotest_common.sh@955 -- # kill 76631 00:26:17.842 12:23:11 -- common/autotest_common.sh@960 -- # wait 76631 00:26:18.113 00:26:18.113 real 0m18.721s 00:26:18.113 user 0m36.390s 00:26:18.113 sys 0m4.700s 00:26:18.113 12:23:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:18.113 12:23:11 -- common/autotest_common.sh@10 -- # set +x 00:26:18.113 ************************************ 00:26:18.113 END TEST nvmf_digest_error 00:26:18.113 ************************************ 00:26:18.113 12:23:11 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:18.113 12:23:11 -- host/digest.sh@150 -- # nvmftestfini 00:26:18.113 12:23:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:18.113 12:23:11 -- nvmf/common.sh@117 -- # sync 00:26:18.113 12:23:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:18.113 12:23:11 -- nvmf/common.sh@120 -- # set +e 00:26:18.113 12:23:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:18.113 12:23:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:18.113 rmmod nvme_tcp 00:26:18.113 rmmod nvme_fabrics 00:26:18.113 rmmod nvme_keyring 00:26:18.113 12:23:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:18.113 12:23:11 -- nvmf/common.sh@124 -- # set -e 00:26:18.113 12:23:11 -- nvmf/common.sh@125 -- # return 0 00:26:18.113 12:23:11 -- nvmf/common.sh@478 -- # '[' -n 76631 ']' 00:26:18.113 12:23:11 -- nvmf/common.sh@479 -- # killprocess 76631 00:26:18.113 12:23:11 -- common/autotest_common.sh@936 -- # '[' -z 76631 ']' 00:26:18.113 12:23:11 -- common/autotest_common.sh@940 -- # kill -0 76631 00:26:18.113 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (76631) - No such process 00:26:18.113 Process with pid 76631 is not found 00:26:18.113 12:23:11 -- common/autotest_common.sh@963 -- # echo 'Process with pid 76631 is not found' 00:26:18.113 12:23:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:18.113 12:23:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:18.113 12:23:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:18.113 12:23:11 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:18.113 12:23:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:18.113 12:23:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.113 12:23:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:18.113 12:23:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.113 12:23:11 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:18.113 00:26:18.113 real 0m38.767s 00:26:18.113 user 1m14.217s 
00:26:18.113 sys 0m9.743s 00:26:18.113 12:23:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:18.113 12:23:11 -- common/autotest_common.sh@10 -- # set +x 00:26:18.113 ************************************ 00:26:18.113 END TEST nvmf_digest 00:26:18.113 ************************************ 00:26:18.113 12:23:11 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:26:18.113 12:23:11 -- nvmf/nvmf.sh@113 -- # [[ 1 -eq 1 ]] 00:26:18.113 12:23:11 -- nvmf/nvmf.sh@114 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:26:18.113 12:23:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:18.114 12:23:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:18.114 12:23:11 -- common/autotest_common.sh@10 -- # set +x 00:26:18.374 ************************************ 00:26:18.374 START TEST nvmf_multipath 00:26:18.374 ************************************ 00:26:18.374 12:23:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:26:18.374 * Looking for test storage... 00:26:18.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:18.374 12:23:11 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:18.374 12:23:11 -- nvmf/common.sh@7 -- # uname -s 00:26:18.374 12:23:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:18.374 12:23:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:18.374 12:23:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:18.374 12:23:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:18.374 12:23:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:18.374 12:23:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:18.374 12:23:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:18.374 12:23:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:18.374 12:23:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:18.374 12:23:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:18.374 12:23:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:26:18.374 12:23:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:26:18.374 12:23:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:18.374 12:23:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:18.374 12:23:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:18.374 12:23:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:18.374 12:23:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:18.374 12:23:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:18.374 12:23:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:18.374 12:23:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:18.374 12:23:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:26:18.374 12:23:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.374 12:23:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.374 12:23:11 -- paths/export.sh@5 -- # export PATH 00:26:18.374 12:23:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.374 12:23:11 -- nvmf/common.sh@47 -- # : 0 00:26:18.374 12:23:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:18.374 12:23:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:18.374 12:23:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:18.374 12:23:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:18.374 12:23:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:18.374 12:23:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:18.374 12:23:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:18.374 12:23:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:18.374 12:23:11 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:18.374 12:23:11 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:18.374 12:23:11 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:18.374 12:23:11 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:26:18.374 12:23:11 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:18.374 12:23:11 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:18.374 12:23:11 -- host/multipath.sh@30 -- # nvmftestinit 00:26:18.374 12:23:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:18.374 12:23:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:18.374 12:23:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:18.374 12:23:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:18.374 12:23:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:18.374 12:23:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.374 12:23:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:26:18.374 12:23:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.374 12:23:11 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:18.374 12:23:11 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:18.374 12:23:11 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:18.374 12:23:11 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:18.374 12:23:11 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:18.374 12:23:11 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:18.374 12:23:11 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.374 12:23:11 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.374 12:23:11 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:18.374 12:23:11 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:18.374 12:23:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:18.374 12:23:11 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:18.374 12:23:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:18.374 12:23:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.374 12:23:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:18.374 12:23:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:18.374 12:23:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:18.374 12:23:11 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:18.374 12:23:11 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:18.374 12:23:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:18.374 Cannot find device "nvmf_tgt_br" 00:26:18.374 12:23:11 -- nvmf/common.sh@155 -- # true 00:26:18.374 12:23:11 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:18.374 Cannot find device "nvmf_tgt_br2" 00:26:18.374 12:23:11 -- nvmf/common.sh@156 -- # true 00:26:18.374 12:23:11 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:18.374 12:23:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:18.374 Cannot find device "nvmf_tgt_br" 00:26:18.374 12:23:11 -- nvmf/common.sh@158 -- # true 00:26:18.374 12:23:11 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:18.374 Cannot find device "nvmf_tgt_br2" 00:26:18.374 12:23:11 -- nvmf/common.sh@159 -- # true 00:26:18.374 12:23:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:18.374 12:23:11 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:18.636 12:23:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:18.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:18.636 12:23:11 -- nvmf/common.sh@162 -- # true 00:26:18.636 12:23:11 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:18.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:18.636 12:23:11 -- nvmf/common.sh@163 -- # true 00:26:18.636 12:23:11 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:18.636 12:23:11 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:18.636 12:23:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:18.636 12:23:11 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:18.636 12:23:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:18.636 12:23:11 -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:18.636 12:23:11 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:18.636 12:23:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:18.636 12:23:11 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:18.636 12:23:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:18.636 12:23:11 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:18.636 12:23:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:18.636 12:23:11 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:18.636 12:23:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:18.636 12:23:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:18.636 12:23:11 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:18.636 12:23:11 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:18.636 12:23:11 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:18.636 12:23:11 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:18.636 12:23:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:18.636 12:23:12 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:18.636 12:23:12 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:18.636 12:23:12 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:18.636 12:23:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:18.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:18.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:26:18.636 00:26:18.636 --- 10.0.0.2 ping statistics --- 00:26:18.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.636 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:26:18.636 12:23:12 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:18.636 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:18.636 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:26:18.636 00:26:18.636 --- 10.0.0.3 ping statistics --- 00:26:18.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.636 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:26:18.636 12:23:12 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:18.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:18.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:26:18.636 00:26:18.636 --- 10.0.0.1 ping statistics --- 00:26:18.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.636 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:26:18.636 12:23:12 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.636 12:23:12 -- nvmf/common.sh@422 -- # return 0 00:26:18.636 12:23:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:18.636 12:23:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.636 12:23:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:18.636 12:23:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:18.636 12:23:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.636 12:23:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:18.636 12:23:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:18.636 12:23:12 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:26:18.636 12:23:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:18.636 12:23:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:18.636 12:23:12 -- common/autotest_common.sh@10 -- # set +x 00:26:18.636 12:23:12 -- nvmf/common.sh@470 -- # nvmfpid=77110 00:26:18.636 12:23:12 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:18.636 12:23:12 -- nvmf/common.sh@471 -- # waitforlisten 77110 00:26:18.636 12:23:12 -- common/autotest_common.sh@817 -- # '[' -z 77110 ']' 00:26:18.636 12:23:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.636 12:23:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:18.636 12:23:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.636 12:23:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:18.636 12:23:12 -- common/autotest_common.sh@10 -- # set +x 00:26:18.894 [2024-04-26 12:23:12.152066] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:26:18.894 [2024-04-26 12:23:12.152193] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.894 [2024-04-26 12:23:12.296546] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:19.151 [2024-04-26 12:23:12.439383] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.151 [2024-04-26 12:23:12.439695] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.151 [2024-04-26 12:23:12.439804] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.151 [2024-04-26 12:23:12.439900] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.151 [2024-04-26 12:23:12.439984] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
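The ip/iptables trace above is the harness building its virtual test network before starting nvmf_tgt: one veth pair stays on the host side (nvmf_init_if, 10.0.0.1/24), the far end of a second pair is moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if, 10.0.0.2/24), the near ends are enslaved to the nvmf_br bridge, TCP port 4420 is opened on the initiator interface, and the pings confirm both directions work. A condensed stand-alone replay of that topology, assuming a root shell, that none of the names are already in use, and omitting the second target interface (nvmf_tgt_if2 / 10.0.0.3), which follows the same pattern:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge the two near ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                          # host -> namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # namespace -> host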
00:26:19.151 [2024-04-26 12:23:12.440203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.151 [2024-04-26 12:23:12.440210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.716 12:23:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:19.716 12:23:13 -- common/autotest_common.sh@850 -- # return 0 00:26:19.716 12:23:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:19.716 12:23:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:19.716 12:23:13 -- common/autotest_common.sh@10 -- # set +x 00:26:19.716 12:23:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.716 12:23:13 -- host/multipath.sh@33 -- # nvmfapp_pid=77110 00:26:19.716 12:23:13 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:19.973 [2024-04-26 12:23:13.367698] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:19.973 12:23:13 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:20.539 Malloc0 00:26:20.539 12:23:13 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:20.539 12:23:13 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:21.105 12:23:14 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:21.105 [2024-04-26 12:23:14.530337] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.105 12:23:14 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:21.363 [2024-04-26 12:23:14.758478] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:21.363 12:23:14 -- host/multipath.sh@44 -- # bdevperf_pid=77170 00:26:21.363 12:23:14 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:21.363 12:23:14 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:21.363 12:23:14 -- host/multipath.sh@47 -- # waitforlisten 77170 /var/tmp/bdevperf.sock 00:26:21.363 12:23:14 -- common/autotest_common.sh@817 -- # '[' -z 77170 ']' 00:26:21.363 12:23:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:21.363 12:23:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:21.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:21.363 12:23:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
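The RPC sequence traced here stands up the multipath target: a TCP transport, a 64 MB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 created with -r so it reports ANA state, the namespace, and two listeners on the same address but different ports (4420 and 4421) so the host has two distinct paths to fail over between; bdevperf is then launched on its own RPC socket and the script waits for it, as the lines that follow show. A condensed replay of just the target-side calls, assuming nvmf_tgt is already up and answering on the default /var/tmp/spdk.sock:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                   # backing namespace for the subsystem
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421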
00:26:21.363 12:23:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:21.363 12:23:14 -- common/autotest_common.sh@10 -- # set +x 00:26:22.296 12:23:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:22.296 12:23:15 -- common/autotest_common.sh@850 -- # return 0 00:26:22.296 12:23:15 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:22.554 12:23:16 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:23.119 Nvme0n1 00:26:23.119 12:23:16 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:23.378 Nvme0n1 00:26:23.378 12:23:16 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:23.378 12:23:16 -- host/multipath.sh@78 -- # sleep 1 00:26:24.311 12:23:17 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:26:24.311 12:23:17 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:24.569 12:23:18 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:24.827 12:23:18 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:26:24.827 12:23:18 -- host/multipath.sh@65 -- # dtrace_pid=77211 00:26:24.827 12:23:18 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77110 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:24.827 12:23:18 -- host/multipath.sh@66 -- # sleep 6 00:26:31.400 12:23:24 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:31.400 12:23:24 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:31.400 12:23:24 -- host/multipath.sh@67 -- # active_port=4421 00:26:31.400 12:23:24 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:31.400 Attaching 4 probes... 
00:26:31.400 @path[10.0.0.2, 4421]: 16894 00:26:31.400 @path[10.0.0.2, 4421]: 17112 00:26:31.400 @path[10.0.0.2, 4421]: 17077 00:26:31.400 @path[10.0.0.2, 4421]: 17107 00:26:31.400 @path[10.0.0.2, 4421]: 17114 00:26:31.400 12:23:24 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:31.400 12:23:24 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:31.400 12:23:24 -- host/multipath.sh@69 -- # sed -n 1p 00:26:31.400 12:23:24 -- host/multipath.sh@69 -- # port=4421 00:26:31.400 12:23:24 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:31.400 12:23:24 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:31.400 12:23:24 -- host/multipath.sh@72 -- # kill 77211 00:26:31.400 12:23:24 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:31.400 12:23:24 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:26:31.400 12:23:24 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:31.400 12:23:24 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:31.968 12:23:25 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:26:31.968 12:23:25 -- host/multipath.sh@65 -- # dtrace_pid=77329 00:26:31.968 12:23:25 -- host/multipath.sh@66 -- # sleep 6 00:26:31.968 12:23:25 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77110 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:38.554 12:23:31 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:38.554 12:23:31 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:26:38.554 12:23:31 -- host/multipath.sh@67 -- # active_port=4420 00:26:38.554 12:23:31 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:38.554 Attaching 4 probes... 
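On the host side, bdevperf attaches the same subsystem twice, once per listener; the second bdev_nvme_attach_controller uses -x multipath, so port 4421 becomes a second path of the existing Nvme0n1 bdev rather than a new controller (both calls print Nvme0n1). The test then drives failover by flipping ANA states with nvmf_subsystem_listener_set_ana_state, as in the pair of calls just above, and confirm_io_on_port checks where the I/O actually lands: bpftrace counts completions per path into trace.txt while nvmf_subsystem_get_listeners plus a jq filter reports which port the target advertises in the expected state. A rough replay of one round (the first one, 4420 non_optimized / 4421 optimized); the nvmf_tgt pid 77110 is specific to this run, the redirection of the bpftrace output into trace.txt is not visible in the trace, and the final pipeline is reconstructed from the separate awk/cut/sed steps logged above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf() { "$rpc" -s /var/tmp/bdevperf.sock "$@"; }           # bdevperf has its own RPC socket
  bperf bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  bperf bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  # steer I/O toward 4421: 4420 non_optimized, 4421 optimized
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
  # sample per-path completions for a few seconds
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77110 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > trace.txt &
  sleep 6
  # port the target reports as optimized
  $rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
  # port the probes actually saw the I/O on; the test asserts the two match
  awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p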
00:26:38.554 @path[10.0.0.2, 4420]: 16896 00:26:38.554 @path[10.0.0.2, 4420]: 17193 00:26:38.554 @path[10.0.0.2, 4420]: 17096 00:26:38.554 @path[10.0.0.2, 4420]: 17094 00:26:38.554 @path[10.0.0.2, 4420]: 16959 00:26:38.554 12:23:31 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:38.554 12:23:31 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:38.554 12:23:31 -- host/multipath.sh@69 -- # sed -n 1p 00:26:38.554 12:23:31 -- host/multipath.sh@69 -- # port=4420 00:26:38.554 12:23:31 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:26:38.554 12:23:31 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:26:38.554 12:23:31 -- host/multipath.sh@72 -- # kill 77329 00:26:38.554 12:23:31 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:38.554 12:23:31 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:26:38.554 12:23:31 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:38.554 12:23:31 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:38.554 12:23:31 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:26:38.554 12:23:31 -- host/multipath.sh@65 -- # dtrace_pid=77447 00:26:38.554 12:23:31 -- host/multipath.sh@66 -- # sleep 6 00:26:38.554 12:23:31 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77110 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:45.114 12:23:37 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:45.114 12:23:37 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:45.114 12:23:38 -- host/multipath.sh@67 -- # active_port=4421 00:26:45.114 12:23:38 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:45.114 Attaching 4 probes... 
00:26:45.114 @path[10.0.0.2, 4421]: 12543 00:26:45.114 @path[10.0.0.2, 4421]: 16767 00:26:45.114 @path[10.0.0.2, 4421]: 16767 00:26:45.115 @path[10.0.0.2, 4421]: 16827 00:26:45.115 @path[10.0.0.2, 4421]: 16816 00:26:45.115 12:23:38 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:45.115 12:23:38 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:45.115 12:23:38 -- host/multipath.sh@69 -- # sed -n 1p 00:26:45.115 12:23:38 -- host/multipath.sh@69 -- # port=4421 00:26:45.115 12:23:38 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:45.115 12:23:38 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:45.115 12:23:38 -- host/multipath.sh@72 -- # kill 77447 00:26:45.115 12:23:38 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:45.115 12:23:38 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:26:45.115 12:23:38 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:45.115 12:23:38 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:45.373 12:23:38 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:26:45.373 12:23:38 -- host/multipath.sh@65 -- # dtrace_pid=77565 00:26:45.373 12:23:38 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77110 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:45.373 12:23:38 -- host/multipath.sh@66 -- # sleep 6 00:26:51.951 12:23:44 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:26:51.951 12:23:44 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:51.951 12:23:45 -- host/multipath.sh@67 -- # active_port= 00:26:51.951 12:23:45 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:51.951 Attaching 4 probes... 
00:26:51.951 00:26:51.951 00:26:51.951 00:26:51.951 00:26:51.951 00:26:51.951 12:23:45 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:51.951 12:23:45 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:51.951 12:23:45 -- host/multipath.sh@69 -- # sed -n 1p 00:26:51.951 12:23:45 -- host/multipath.sh@69 -- # port= 00:26:51.951 12:23:45 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:26:51.951 12:23:45 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:26:51.951 12:23:45 -- host/multipath.sh@72 -- # kill 77565 00:26:51.951 12:23:45 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:51.951 12:23:45 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:26:51.951 12:23:45 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:51.951 12:23:45 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:52.210 12:23:45 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:26:52.210 12:23:45 -- host/multipath.sh@65 -- # dtrace_pid=77677 00:26:52.210 12:23:45 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77110 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:52.210 12:23:45 -- host/multipath.sh@66 -- # sleep 6 00:26:58.815 12:23:51 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:58.815 12:23:51 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:58.815 12:23:51 -- host/multipath.sh@67 -- # active_port=4421 00:26:58.815 12:23:51 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:58.815 Attaching 4 probes... 
00:26:58.815 @path[10.0.0.2, 4421]: 15118 00:26:58.815 @path[10.0.0.2, 4421]: 15976 00:26:58.815 @path[10.0.0.2, 4421]: 15992 00:26:58.815 @path[10.0.0.2, 4421]: 15968 00:26:58.815 @path[10.0.0.2, 4421]: 16195 00:26:58.815 12:23:51 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:58.815 12:23:51 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:58.815 12:23:51 -- host/multipath.sh@69 -- # sed -n 1p 00:26:58.815 12:23:51 -- host/multipath.sh@69 -- # port=4421 00:26:58.815 12:23:51 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:58.815 12:23:51 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:58.815 12:23:51 -- host/multipath.sh@72 -- # kill 77677 00:26:58.815 12:23:51 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:58.815 12:23:51 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:58.815 [2024-04-26 12:23:52.184826] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.184892] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.184904] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.184914] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.184923] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.184932] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.184940] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.184949] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.184961] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.184975] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.184986] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.184995] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.185003] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.185011] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.185019] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.185028] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.185036] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.185044] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.185052] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.185061] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.185069] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.185078] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 [2024-04-26 12:23:52.185088] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074cc0 is same with the state(5) to be set 00:26:58.815 12:23:52 -- host/multipath.sh@101 -- # sleep 1 00:26:59.754 12:23:53 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:26:59.754 12:23:53 -- host/multipath.sh@65 -- # dtrace_pid=77801 00:26:59.754 12:23:53 -- host/multipath.sh@66 -- # sleep 6 00:26:59.754 12:23:53 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77110 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:06.351 12:23:59 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:06.351 12:23:59 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:27:06.351 12:23:59 -- host/multipath.sh@67 -- # active_port=4420 00:27:06.351 12:23:59 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:06.351 Attaching 4 probes... 
00:27:06.351 @path[10.0.0.2, 4420]: 15398 00:27:06.351 @path[10.0.0.2, 4420]: 15602 00:27:06.351 @path[10.0.0.2, 4420]: 15662 00:27:06.351 @path[10.0.0.2, 4420]: 15739 00:27:06.351 @path[10.0.0.2, 4420]: 15741 00:27:06.351 12:23:59 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:06.351 12:23:59 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:06.351 12:23:59 -- host/multipath.sh@69 -- # sed -n 1p 00:27:06.351 12:23:59 -- host/multipath.sh@69 -- # port=4420 00:27:06.351 12:23:59 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:27:06.351 12:23:59 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:27:06.351 12:23:59 -- host/multipath.sh@72 -- # kill 77801 00:27:06.351 12:23:59 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:06.351 12:23:59 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:06.351 [2024-04-26 12:23:59.775357] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:06.351 12:23:59 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:06.918 12:24:00 -- host/multipath.sh@111 -- # sleep 6 00:27:13.477 12:24:06 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:27:13.477 12:24:06 -- host/multipath.sh@65 -- # dtrace_pid=77975 00:27:13.477 12:24:06 -- host/multipath.sh@66 -- # sleep 6 00:27:13.477 12:24:06 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77110 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:18.737 12:24:12 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:18.737 12:24:12 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:18.994 12:24:12 -- host/multipath.sh@67 -- # active_port=4421 00:27:18.995 12:24:12 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:18.995 Attaching 4 probes... 
00:27:18.995 @path[10.0.0.2, 4421]: 16352 00:27:18.995 @path[10.0.0.2, 4421]: 16726 00:27:18.995 @path[10.0.0.2, 4421]: 16638 00:27:18.995 @path[10.0.0.2, 4421]: 16570 00:27:18.995 @path[10.0.0.2, 4421]: 16656 00:27:18.995 12:24:12 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:18.995 12:24:12 -- host/multipath.sh@69 -- # sed -n 1p 00:27:18.995 12:24:12 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:18.995 12:24:12 -- host/multipath.sh@69 -- # port=4421 00:27:18.995 12:24:12 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:18.995 12:24:12 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:18.995 12:24:12 -- host/multipath.sh@72 -- # kill 77975 00:27:18.995 12:24:12 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:18.995 12:24:12 -- host/multipath.sh@114 -- # killprocess 77170 00:27:18.995 12:24:12 -- common/autotest_common.sh@936 -- # '[' -z 77170 ']' 00:27:18.995 12:24:12 -- common/autotest_common.sh@940 -- # kill -0 77170 00:27:18.995 12:24:12 -- common/autotest_common.sh@941 -- # uname 00:27:18.995 12:24:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:18.995 12:24:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77170 00:27:18.995 killing process with pid 77170 00:27:18.995 12:24:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:27:18.995 12:24:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:27:18.995 12:24:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77170' 00:27:18.995 12:24:12 -- common/autotest_common.sh@955 -- # kill 77170 00:27:18.995 12:24:12 -- common/autotest_common.sh@960 -- # wait 77170 00:27:19.262 Connection closed with partial response: 00:27:19.262 00:27:19.262 00:27:19.262 12:24:12 -- host/multipath.sh@116 -- # wait 77170 00:27:19.262 12:24:12 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:19.262 [2024-04-26 12:23:14.848217] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:27:19.262 [2024-04-26 12:23:14.848405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77170 ] 00:27:19.262 [2024-04-26 12:23:14.997821] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.262 [2024-04-26 12:23:15.116506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:19.262 Running I/O for 90 seconds... 
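The dump that follows, produced by the cat of try.txt above, is bdevperf's own log for the 90-second verify run: each *NOTICE* pair is one I/O, the command print followed by its completion, and during the windows in which a listener had been set inaccessible the completions carry the ASYMMETRIC ACCESS INACCESSIBLE (03/02) status seen below while the workload keeps running on the remaining path. A rough way to gauge how many commands hit an inaccessible path in such a log:

  # count completions failed back with the ANA "inaccessible" status
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt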
00:27:19.262 [2024-04-26 12:23:25.117090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.262 [2024-04-26 12:23:25.117193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:19.262 [2024-04-26 12:23:25.117257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.262 [2024-04-26 12:23:25.117278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:19.262 [2024-04-26 12:23:25.117302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:27608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.262 [2024-04-26 12:23:25.117317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:19.262 [2024-04-26 12:23:25.117339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:27616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.262 [2024-04-26 12:23:25.117354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:19.262 [2024-04-26 12:23:25.117375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:27624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.262 [2024-04-26 12:23:25.117390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:19.262 [2024-04-26 12:23:25.117411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.262 [2024-04-26 12:23:25.117426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:19.262 [2024-04-26 12:23:25.117447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.262 [2024-04-26 12:23:25.117462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:19.262 [2024-04-26 12:23:25.117483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:27648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.262 [2024-04-26 12:23:25.117498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:19.262 [2024-04-26 12:23:25.117519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:27656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.262 [2024-04-26 12:23:25.117533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:19.262 [2024-04-26 12:23:25.117554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:27664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.262 [2024-04-26 12:23:25.117569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:19.262 [2024-04-26 12:23:25.117590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:27672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.262 [2024-04-26 12:23:25.117625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:19.262 [2024-04-26 12:23:25.117649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:27680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.262 [2024-04-26 12:23:25.117665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.117686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:27688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.117701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.117722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:27696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.117736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.117757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.117772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.117793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.117807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.117830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:27720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.117845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.117866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:27728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.117881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.117902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:27736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.117917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.117938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:27744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.117953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.117974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:27752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.117988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.263 [2024-04-26 12:23:25.118024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.263 [2024-04-26 12:23:25.118060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.263 [2024-04-26 12:23:25.118106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.263 [2024-04-26 12:23:25.118142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.263 [2024-04-26 12:23:25.118192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:27376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.263 [2024-04-26 12:23:25.118231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.263 [2024-04-26 12:23:25.118269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.263 [2024-04-26 12:23:25.118305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:19.263 [2024-04-26 12:23:25.118342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.118379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:27776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.118416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.118580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.118619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:27800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.118656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:27808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.118704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:27816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.118741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:27824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.118777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:27832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.118814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:27840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.118851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.118887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:27856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.118923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.118960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.118982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:27872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.118997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.119019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.263 [2024-04-26 12:23:25.119034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.119055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.263 [2024-04-26 12:23:25.119070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.119092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.263 [2024-04-26 12:23:25.119107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.119128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.263 [2024-04-26 12:23:25.119150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.119188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.263 [2024-04-26 12:23:25.119206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.119228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:27432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.263 [2024-04-26 12:23:25.119244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:19.263 [2024-04-26 12:23:25.119265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.263 [2024-04-26 12:23:25.119280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.119302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:27448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-04-26 12:23:25.119318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.119339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:27456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-04-26 12:23:25.119354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.119375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.119390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.119412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:27896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.119426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.119448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:27904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.119463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.119514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:27912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.119535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.119559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:27920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.119574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.119596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:27928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.119611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:27:19.264 [2024-04-26 12:23:25.119634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:27936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.119661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.119684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:27944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.119700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.119722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:27952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.119737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.119759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:27960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.119774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.119795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.119811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.119833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:27976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.119849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.119870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.119885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.119907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:27992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.119922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.119944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.119960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.119981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:28008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.119997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:28016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.120033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:28024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.120070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:28032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.120107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:28040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.120152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:27464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-04-26 12:23:25.120204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:27472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-04-26 12:23:25.120242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-04-26 12:23:25.120278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-04-26 12:23:25.120315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:27496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-04-26 12:23:25.120352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-04-26 12:23:25.120388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-04-26 12:23:25.120424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-04-26 12:23:25.120461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:28048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.120497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:28056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.120534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:28064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.120579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:28072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.120625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:28080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.120662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:28088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.120699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:28096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.120735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:28104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:19.264 [2024-04-26 12:23:25.120772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:19.264 [2024-04-26 12:23:25.120793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:28112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.264 [2024-04-26 12:23:25.120809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.120831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.120846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.120867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:28128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.120882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.120903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:28136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.120918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.120940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:28144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.120955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.120976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:28152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.120992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.121013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:28160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.121028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.121053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.121076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.121099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:28176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.121114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.121136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:28184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.121151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.121182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-04-26 12:23:25.121205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.121227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:27536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-04-26 12:23:25.121243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.121264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-04-26 12:23:25.121280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.121301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-04-26 12:23:25.121316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.121338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-04-26 12:23:25.121353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.121374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-04-26 12:23:25.121389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.121411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:27576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-04-26 12:23:25.121427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.122888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:27584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-04-26 12:23:25.122919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.122948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.122966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.122988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.123014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.123038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:28208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.123054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.123075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:28216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.123090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.123112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:28224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.123127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.123149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:28232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.123164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.123202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:28240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.123219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.123378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:28248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.123402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.123427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.123445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.123467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:28264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.123494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.123517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:28272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.123532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:27:19.265 [2024-04-26 12:23:25.123553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:28280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.123569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.123590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:28288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.123605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.123626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:28296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.123641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.123673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.123690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.123716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:28312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.123732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.123753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:28320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.123768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.123790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:28328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.123805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:25.123827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:28336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:25.123842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:31.659862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:31.659958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:19.265 [2024-04-26 12:23:31.660031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.265 [2024-04-26 12:23:31.660057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.660104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.660149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.660212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.266 [2024-04-26 12:23:31.660258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.266 [2024-04-26 12:23:31.660303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.266 [2024-04-26 12:23:31.660381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.266 [2024-04-26 12:23:31.660424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.266 [2024-04-26 12:23:31.660467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.266 [2024-04-26 12:23:31.660511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.266 [2024-04-26 12:23:31.660555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.266 [2024-04-26 12:23:31.660598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.660642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.660685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.660729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.660802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.660848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.660891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.660948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.660977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.660997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:19.266 [2024-04-26 12:23:31.661043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.661088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.661132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.661193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.661241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.661287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.661332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.661376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.661420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.661464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.661518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.266 [2024-04-26 12:23:31.661567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.266 [2024-04-26 12:23:31.661613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.266 [2024-04-26 12:23:31.661657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.266 [2024-04-26 12:23:31.661701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.266 [2024-04-26 12:23:31.661746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.266 [2024-04-26 12:23:31.661791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.266 [2024-04-26 12:23:31.661835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.266 [2024-04-26 12:23:31.661860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.266 [2024-04-26 12:23:31.661879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.661905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.267 [2024-04-26 12:23:31.661924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.661949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.661967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.661993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.662013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.662059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.662113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.662159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.662222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.662276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.662347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.662396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.662442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 
00:27:19.267 [2024-04-26 12:23:31.662467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.662486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.662531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.662575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.662619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.662664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.662721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.662765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.662811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.267 [2024-04-26 12:23:31.662855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.267 [2024-04-26 12:23:31.662900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:82 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.267 [2024-04-26 12:23:31.662945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.662970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.267 [2024-04-26 12:23:31.662988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.663015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.267 [2024-04-26 12:23:31.663034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.663059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.267 [2024-04-26 12:23:31.663078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.663105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.267 [2024-04-26 12:23:31.663124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.663150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.267 [2024-04-26 12:23:31.663185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.663217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.663237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.663262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.663374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.663407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.663427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.663461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.663479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.663521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.267 [2024-04-26 12:23:31.663542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:19.267 [2024-04-26 12:23:31.663574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.268 [2024-04-26 12:23:31.663594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.663620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.268 [2024-04-26 12:23:31.663642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.663668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.268 [2024-04-26 12:23:31.663687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.663713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.268 [2024-04-26 12:23:31.663732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.663758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.268 [2024-04-26 12:23:31.663777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.663803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.268 [2024-04-26 12:23:31.663822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.663848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.268 [2024-04-26 12:23:31.663867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.663892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.268 [2024-04-26 12:23:31.663911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.663937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:19.268 [2024-04-26 12:23:31.663965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.663994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.268 [2024-04-26 12:23:31.664014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.268 [2024-04-26 12:23:31.664065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.268 [2024-04-26 12:23:31.664109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.268 [2024-04-26 12:23:31.664154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.268 [2024-04-26 12:23:31.664216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.664262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.664308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.664352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.664396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 
lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.664442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.664486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.664531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.664586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.664631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.664677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.664721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.664767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.664812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.664857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.664903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.664948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.664974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.664993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.665019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.665038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.665063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.665082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.665117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.665138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.666032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.666064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.666102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.666124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.666157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.666194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.666239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.268 [2024-04-26 12:23:31.666259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 
m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.666291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.268 [2024-04-26 12:23:31.666311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:19.268 [2024-04-26 12:23:31.666344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:31.666364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:31.666396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:31.666415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:31.666449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:31.666470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:31.666502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:31.666521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:31.666554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:31.666573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:31.666606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:31.666635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:31.666688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:31.666728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:31.666764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:31.666785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:31.666818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:31.666838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:31.666870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.269 [2024-04-26 12:23:31.666890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:31.666922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.269 [2024-04-26 12:23:31.666942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:31.666975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.269 [2024-04-26 12:23:31.666994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:31.667026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.269 [2024-04-26 12:23:31.667046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:31.667078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.269 [2024-04-26 12:23:31.667097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:31.667130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.269 [2024-04-26 12:23:31.667149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:31.667198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.269 [2024-04-26 12:23:31.667222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:31.667256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.269 [2024-04-26 12:23:31.667276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.762284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:38.762385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.762460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:38.762516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.762548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:38.762567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.762593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:38.762612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.762637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:38.762656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.762691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:38.762709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.762735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:38.762753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.762778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:38.762797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.762822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:38.762840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.762865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:38.762883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.762909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:38.762928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.762953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:19.269 [2024-04-26 12:23:38.762971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.762996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:38.763014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.763039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.269 [2024-04-26 12:23:38.763057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.763094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.269 [2024-04-26 12:23:38.763114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.763140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.269 [2024-04-26 12:23:38.763159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.763203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.269 [2024-04-26 12:23:38.763225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.763252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.269 [2024-04-26 12:23:38.763271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.763297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.269 [2024-04-26 12:23:38.763316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.763343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.269 [2024-04-26 12:23:38.763362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.763387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.269 [2024-04-26 12:23:38.763405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.763431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 
nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.269 [2024-04-26 12:23:38.763450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:19.269 [2024-04-26 12:23:38.763477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.763495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.763542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.763562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.763615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.763641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.763669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.763688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.763726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.763747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.763773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.763792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.763818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.763837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.763862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.763881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.763907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.763925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.763952] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.763971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.763997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.764016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.764061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.764106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.764151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.764215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.764262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.764316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.764363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.270 [2024-04-26 12:23:38.764408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 
00:27:19.270 [2024-04-26 12:23:38.764434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.270 [2024-04-26 12:23:38.764453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.270 [2024-04-26 12:23:38.764498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.270 [2024-04-26 12:23:38.764543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.270 [2024-04-26 12:23:38.764587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.270 [2024-04-26 12:23:38.764632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.270 [2024-04-26 12:23:38.764677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.270 [2024-04-26 12:23:38.764722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.764767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.764812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.764864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.764911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.764956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.764982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.765001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.765027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.765046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.765071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.765090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.765116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.765134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.765160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.765204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.765235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.765254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.765280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.765299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.765325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.765344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:19.270 [2024-04-26 12:23:38.765370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.270 [2024-04-26 12:23:38.765389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.765415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.765434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.765471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.765492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.765537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.765562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.765589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.765608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.765634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.271 [2024-04-26 12:23:38.765652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.765678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.271 [2024-04-26 12:23:38.765696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.765722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.271 [2024-04-26 12:23:38.765741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.765766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.271 [2024-04-26 12:23:38.765785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.765811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:19.271 [2024-04-26 12:23:38.765829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.765855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.271 [2024-04-26 12:23:38.765874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.765899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.271 [2024-04-26 12:23:38.765918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.765944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.271 [2024-04-26 12:23:38.765964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.765990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 
lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.271 [2024-04-26 12:23:38.766859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.271 [2024-04-26 12:23:38.766904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:19.271 [2024-04-26 12:23:38.766929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.271 [2024-04-26 12:23:38.766948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.766974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.766993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.767019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.767038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.767063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.767082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.767108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.767127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.767153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.767194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 
00:27:19.272 [2024-04-26 12:23:38.767224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.767244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.767271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.767289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.767315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.767333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.767359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.767378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.767404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.767422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.767449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.767468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.767494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.767530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.767558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.767576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.767603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.767622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.767648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.767667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.767692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.767711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.767738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.767766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.767794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.767814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.768696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.768729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.768770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.768791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.768825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.768845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.768877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.272 [2024-04-26 12:23:38.768897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.768929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.272 [2024-04-26 12:23:38.768948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.768981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.272 [2024-04-26 12:23:38.769001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.769033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.272 [2024-04-26 12:23:38.769053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.769085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.272 [2024-04-26 12:23:38.769105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.769138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.272 [2024-04-26 12:23:38.769158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.769206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.272 [2024-04-26 12:23:38.769228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.769262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.272 [2024-04-26 12:23:38.769282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.769347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.272 [2024-04-26 12:23:38.769372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.769406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.272 [2024-04-26 12:23:38.769426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.769459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.272 [2024-04-26 12:23:38.769478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:38.769511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.272 [2024-04-26 12:23:38.769530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:52.185166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.272 [2024-04-26 12:23:52.185698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:52.185813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.272 
[2024-04-26 12:23:52.185859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:52.185885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.272 [2024-04-26 12:23:52.185906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:52.185928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.272 [2024-04-26 12:23:52.185946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:52.185967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.272 [2024-04-26 12:23:52.185986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:52.186006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.272 [2024-04-26 12:23:52.186025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.272 [2024-04-26 12:23:52.186046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.186065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.186105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.186196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.186239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.186280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.186320] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.186360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.186414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.186468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.186502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.186531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.186561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.186589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.186617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.186646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.186687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.273 [2024-04-26 12:23:52.186716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.273 [2024-04-26 12:23:52.186746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.273 [2024-04-26 12:23:52.186776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.273 [2024-04-26 12:23:52.186805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.273 [2024-04-26 12:23:52.186834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.273 [2024-04-26 12:23:52.186863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.273 [2024-04-26 12:23:52.186891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.273 [2024-04-26 12:23:52.186921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.273 [2024-04-26 12:23:52.186950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.273 [2024-04-26 12:23:52.186979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.186994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.187008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.187024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.187044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.187060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.187074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.187089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.187103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.187118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.187132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.187148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.187162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.187191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.187206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.187222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.273 [2024-04-26 12:23:52.187236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.187251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.273 [2024-04-26 12:23:52.187265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.187280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.273 [2024-04-26 12:23:52.187294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.187309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.273 [2024-04-26 12:23:52.187323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.187338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.273 [2024-04-26 12:23:52.187352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.187367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.273 [2024-04-26 12:23:52.187381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.187397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.273 [2024-04-26 12:23:52.187411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.273 [2024-04-26 12:23:52.187437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.187452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.187467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.187481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.187496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.187510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.187540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.187555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.187570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.187584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.187599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.187613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:19.274 [2024-04-26 12:23:52.187628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.187643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.187658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.187672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.187687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.187701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.187716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.187730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.187746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.187759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.187775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.187789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.187804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.187824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.187840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.187855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.187870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.187885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.187900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.187914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.187929] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.187943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.187958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.274 [2024-04-26 12:23:52.187972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.187987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.274 [2024-04-26 12:23:52.188001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.274 [2024-04-26 12:23:52.188031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.274 [2024-04-26 12:23:52.188061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.274 [2024-04-26 12:23:52.188090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.274 [2024-04-26 12:23:52.188124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.274 [2024-04-26 12:23:52.188154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.274 [2024-04-26 12:23:52.188198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.188234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.188264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.188292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.188320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.188348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.188383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.188411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.188439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.188467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.188495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.188523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:74 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.188551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.188579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.188617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.274 [2024-04-26 12:23:52.188656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.274 [2024-04-26 12:23:52.188671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.275 [2024-04-26 12:23:52.188684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.188699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.188713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.188728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.188741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.188755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.188769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.188784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.188797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.188812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.188825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.188843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109912 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.188857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.188872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.188885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.188900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.188913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.188928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.188941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.188956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.188975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.188991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.189004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.189033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.189061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:109976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.189092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.189121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109992 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.189149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:110000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.189191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.189227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.189256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.189285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.189313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.189345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:110048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.189380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.189408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.275 [2024-04-26 12:23:52.189436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.275 
[2024-04-26 12:23:52.189465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.275 [2024-04-26 12:23:52.189493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.275 [2024-04-26 12:23:52.189522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.275 [2024-04-26 12:23:52.189551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.275 [2024-04-26 12:23:52.189582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.275 [2024-04-26 12:23:52.189611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.275 [2024-04-26 12:23:52.189640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.275 [2024-04-26 12:23:52.189669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.189697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.189728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.275 [2024-04-26 12:23:52.189761] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.275 [2024-04-26 12:23:52.189776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.276 [2024-04-26 12:23:52.189789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.276 [2024-04-26 12:23:52.189808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.276 [2024-04-26 12:23:52.189821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.276 [2024-04-26 12:23:52.189837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.276 [2024-04-26 12:23:52.189850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.276 [2024-04-26 12:23:52.189866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.276 [2024-04-26 12:23:52.189879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.276 [2024-04-26 12:23:52.189894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd80fc0 is same with the state(5) to be set 00:27:19.276 [2024-04-26 12:23:52.189911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:19.276 [2024-04-26 12:23:52.189921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:19.276 [2024-04-26 12:23:52.189932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110120 len:8 PRP1 0x0 PRP2 0x0 00:27:19.276 [2024-04-26 12:23:52.189945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.276 [2024-04-26 12:23:52.190013] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd80fc0 was disconnected and freed. reset controller. 
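The burst of NOTICE lines above is the expected multipath failover pattern: the active path starts completing in-flight READ/WRITE commands with ASYMMETRIC ACCESS INACCESSIBLE, then with ABORTED - SQ DELETION once its submission queue is torn down, after which bdev_nvme frees the qpair and schedules a controller reset. The log does not show which target-side mechanism this run used to knock the path out; the INACCESSIBLE status points at an ANA state change (the nvmf_subsystem_listener_set_ana_state RPC), but a simpler illustrative alternative with the same host-visible effect of losing a path is to drop and later restore the listener. Sketch below is an assumption, with the NQN, address and port copied from the reconnect messages that follow:
# Illustrative only -- not taken from this log. Remove one target-side listener,
# wait, then bring it back so the host's reconnect poll can succeed.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 10
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421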
00:27:19.276 [2024-04-26 12:23:52.191279] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:19.276 [2024-04-26 12:23:52.191394] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8ea40 (9): Bad file descriptor
00:27:19.276 [2024-04-26 12:23:52.191926] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.276 [2024-04-26 12:23:52.192011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.276 [2024-04-26 12:23:52.192066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.276 [2024-04-26 12:23:52.192094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8ea40 with addr=10.0.0.2, port=4421
00:27:19.276 [2024-04-26 12:23:52.192112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ea40 is same with the state(5) to be set
00:27:19.276 [2024-04-26 12:23:52.192146] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8ea40 (9): Bad file descriptor
00:27:19.276 [2024-04-26 12:23:52.192197] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:19.276 [2024-04-26 12:23:52.192218] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:19.276 [2024-04-26 12:23:52.192233] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:19.276 [2024-04-26 12:23:52.192280] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:19.276 [2024-04-26 12:23:52.192299] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:19.276 [2024-04-26 12:24:02.258794] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
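In the reconnect sequence above, errno = 111 is ECONNREFUSED: the first attempt at 12:23:52 cannot reach 10.0.0.2 port 4421, that reset is reported as failed, and the next reset attempt succeeds at 12:24:02, roughly ten seconds later, once the path is reachable again. A quick way to confirm what errno 111 maps to on the build host (assumes python3 is installed, as it is for these autotest runs):
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# ECONNREFUSED - Connection refused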
00:27:19.276 Received shutdown signal, test time was about 55.552285 seconds 00:27:19.276 00:27:19.276 Latency(us) 00:27:19.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.276 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:19.276 Verification LBA range: start 0x0 length 0x4000 00:27:19.276 Nvme0n1 : 55.55 7015.33 27.40 0.00 0.00 18217.77 1325.61 7046430.72 00:27:19.276 =================================================================================================================== 00:27:19.276 Total : 7015.33 27.40 0.00 0.00 18217.77 1325.61 7046430.72 00:27:19.276 12:24:12 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:19.534 12:24:12 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:27:19.534 12:24:12 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:19.534 12:24:12 -- host/multipath.sh@125 -- # nvmftestfini 00:27:19.534 12:24:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:19.534 12:24:12 -- nvmf/common.sh@117 -- # sync 00:27:19.534 12:24:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:19.534 12:24:12 -- nvmf/common.sh@120 -- # set +e 00:27:19.534 12:24:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:19.534 12:24:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:19.534 rmmod nvme_tcp 00:27:19.534 rmmod nvme_fabrics 00:27:19.534 rmmod nvme_keyring 00:27:19.534 12:24:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:19.791 12:24:13 -- nvmf/common.sh@124 -- # set -e 00:27:19.791 12:24:13 -- nvmf/common.sh@125 -- # return 0 00:27:19.791 12:24:13 -- nvmf/common.sh@478 -- # '[' -n 77110 ']' 00:27:19.791 12:24:13 -- nvmf/common.sh@479 -- # killprocess 77110 00:27:19.791 12:24:13 -- common/autotest_common.sh@936 -- # '[' -z 77110 ']' 00:27:19.791 12:24:13 -- common/autotest_common.sh@940 -- # kill -0 77110 00:27:19.792 12:24:13 -- common/autotest_common.sh@941 -- # uname 00:27:19.792 12:24:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:19.792 12:24:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77110 00:27:19.792 12:24:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:19.792 12:24:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:19.792 killing process with pid 77110 00:27:19.792 12:24:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77110' 00:27:19.792 12:24:13 -- common/autotest_common.sh@955 -- # kill 77110 00:27:19.792 12:24:13 -- common/autotest_common.sh@960 -- # wait 77110 00:27:20.048 12:24:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:20.048 12:24:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:20.048 12:24:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:20.048 12:24:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:20.048 12:24:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:20.048 12:24:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.048 12:24:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.048 12:24:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.048 12:24:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:20.048 00:27:20.048 real 1m1.761s 00:27:20.048 user 2m50.340s 00:27:20.049 sys 0m19.169s 00:27:20.049 12:24:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:20.049 
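The summary row above is internally consistent: with the 4096-byte verify workload, 7015.33 IOPS corresponds to 7015.33 x 4096 / 1048576, i.e. about 27.40 MiB/s, which matches the MiB/s column. The conversion can be reproduced with a one-liner:
awk 'BEGIN { printf "%.2f MiB/s\n", 7015.33 * 4096 / 1048576 }'
# 27.40 MiB/s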
************************************ 00:27:20.049 END TEST nvmf_multipath 00:27:20.049 ************************************ 00:27:20.049 12:24:13 -- common/autotest_common.sh@10 -- # set +x 00:27:20.049 12:24:13 -- nvmf/nvmf.sh@115 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:27:20.049 12:24:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:20.049 12:24:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:20.049 12:24:13 -- common/autotest_common.sh@10 -- # set +x 00:27:20.049 ************************************ 00:27:20.049 START TEST nvmf_timeout 00:27:20.049 ************************************ 00:27:20.049 12:24:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:27:20.307 * Looking for test storage... 00:27:20.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:20.307 12:24:13 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:20.307 12:24:13 -- nvmf/common.sh@7 -- # uname -s 00:27:20.307 12:24:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.307 12:24:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.307 12:24:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.307 12:24:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.307 12:24:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.307 12:24:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.307 12:24:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.307 12:24:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.307 12:24:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.307 12:24:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.307 12:24:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:27:20.307 12:24:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:27:20.307 12:24:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.307 12:24:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.307 12:24:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:20.307 12:24:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.307 12:24:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:20.307 12:24:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.307 12:24:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.307 12:24:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.307 12:24:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.307 12:24:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.307 12:24:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.307 12:24:13 -- paths/export.sh@5 -- # export PATH 00:27:20.307 12:24:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.307 12:24:13 -- nvmf/common.sh@47 -- # : 0 00:27:20.307 12:24:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.307 12:24:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.307 12:24:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.307 12:24:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.307 12:24:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.307 12:24:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.307 12:24:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.307 12:24:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.307 12:24:13 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:20.307 12:24:13 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:20.307 12:24:13 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:20.307 12:24:13 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:20.307 12:24:13 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:20.307 12:24:13 -- host/timeout.sh@19 -- # nvmftestinit 00:27:20.307 12:24:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:20.307 12:24:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.307 12:24:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:20.307 12:24:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:20.307 12:24:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:20.307 12:24:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.307 12:24:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.307 12:24:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.307 12:24:13 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 
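Note: the trace above shows autorun entering the nvmf_timeout stage by handing test/nvmf/host/timeout.sh the --transport=tcp argument and sourcing test/nvmf/common.sh for its port and addressing defaults. A minimal sketch for re-running just this stage by hand, outside the run_test wrapper, assuming a built SPDK tree at the path from the trace, root privileges, and the same virt (veth) networking mode the harness selects below; the explicit NET_TYPE export is an assumption and may be redundant if virt is already the default:

  # hypothetical manual invocation of the same host test (not the autorun path)
  cd /home/vagrant/spdk_repo/spdk
  sudo NET_TYPE=virt ./test/nvmf/host/timeout.sh --transport=tcp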
00:27:20.307 12:24:13 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:20.307 12:24:13 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:20.307 12:24:13 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:20.307 12:24:13 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:20.307 12:24:13 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:20.307 12:24:13 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.307 12:24:13 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.307 12:24:13 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:20.307 12:24:13 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:20.307 12:24:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:20.307 12:24:13 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:20.307 12:24:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:20.307 12:24:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.307 12:24:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:20.307 12:24:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:20.307 12:24:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:20.307 12:24:13 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:20.307 12:24:13 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:20.307 12:24:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:20.307 Cannot find device "nvmf_tgt_br" 00:27:20.307 12:24:13 -- nvmf/common.sh@155 -- # true 00:27:20.307 12:24:13 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:20.307 Cannot find device "nvmf_tgt_br2" 00:27:20.307 12:24:13 -- nvmf/common.sh@156 -- # true 00:27:20.307 12:24:13 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:20.307 12:24:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:20.307 Cannot find device "nvmf_tgt_br" 00:27:20.307 12:24:13 -- nvmf/common.sh@158 -- # true 00:27:20.307 12:24:13 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:20.307 Cannot find device "nvmf_tgt_br2" 00:27:20.307 12:24:13 -- nvmf/common.sh@159 -- # true 00:27:20.307 12:24:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:20.307 12:24:13 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:20.307 12:24:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:20.307 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:20.307 12:24:13 -- nvmf/common.sh@162 -- # true 00:27:20.307 12:24:13 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:20.307 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:20.307 12:24:13 -- nvmf/common.sh@163 -- # true 00:27:20.307 12:24:13 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:20.307 12:24:13 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:20.307 12:24:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:20.307 12:24:13 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:20.307 12:24:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:20.307 12:24:13 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:20.566 12:24:13 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:27:20.566 12:24:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:20.566 12:24:13 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:20.566 12:24:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:20.566 12:24:13 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:20.566 12:24:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:20.566 12:24:13 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:20.566 12:24:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:20.566 12:24:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:20.566 12:24:13 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:20.566 12:24:13 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:20.566 12:24:13 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:20.566 12:24:13 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:20.566 12:24:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:20.566 12:24:13 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:20.566 12:24:13 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:20.566 12:24:13 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:20.566 12:24:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:20.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:20.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:27:20.566 00:27:20.566 --- 10.0.0.2 ping statistics --- 00:27:20.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.566 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:27:20.566 12:24:13 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:20.566 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:20.566 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:27:20.566 00:27:20.566 --- 10.0.0.3 ping statistics --- 00:27:20.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.566 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:27:20.566 12:24:13 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:20.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:20.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:27:20.566 00:27:20.566 --- 10.0.0.1 ping statistics --- 00:27:20.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.566 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:27:20.566 12:24:13 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.566 12:24:13 -- nvmf/common.sh@422 -- # return 0 00:27:20.566 12:24:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:20.566 12:24:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:20.566 12:24:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:20.566 12:24:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:20.566 12:24:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:20.566 12:24:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:20.566 12:24:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:20.566 12:24:13 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:27:20.566 12:24:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:20.566 12:24:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:20.566 12:24:13 -- common/autotest_common.sh@10 -- # set +x 00:27:20.566 12:24:13 -- nvmf/common.sh@470 -- # nvmfpid=78293 00:27:20.566 12:24:13 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:20.566 12:24:13 -- nvmf/common.sh@471 -- # waitforlisten 78293 00:27:20.566 12:24:13 -- common/autotest_common.sh@817 -- # '[' -z 78293 ']' 00:27:20.566 12:24:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.566 12:24:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:20.566 12:24:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.566 12:24:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:20.566 12:24:13 -- common/autotest_common.sh@10 -- # set +x 00:27:20.566 [2024-04-26 12:24:14.003772] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:27:20.566 [2024-04-26 12:24:14.003895] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.824 [2024-04-26 12:24:14.143417] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:20.824 [2024-04-26 12:24:14.273847] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:20.824 [2024-04-26 12:24:14.273925] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:20.824 [2024-04-26 12:24:14.273940] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:20.824 [2024-04-26 12:24:14.273950] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:20.824 [2024-04-26 12:24:14.273960] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
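Note: before the target is started, nvmf_veth_init in the trace above assembles the veth/bridge topology the rest of the test depends on: nvmf_init_if (10.0.0.1/24) stays in the root namespace, nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, their root-side veth peers are enslaved to the nvmf_br bridge, and iptables admits TCP/4420 on the initiator interface. A condensed sketch of those commands, copied from the trace (run as root; not the full common.sh logic, which also tears down any stale devices first):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # root ns -> target ns
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

The target is then launched inside that namespace exactly as logged above: ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3.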
00:27:20.824 [2024-04-26 12:24:14.274135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.824 [2024-04-26 12:24:14.274148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.765 12:24:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:21.765 12:24:15 -- common/autotest_common.sh@850 -- # return 0 00:27:21.765 12:24:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:21.765 12:24:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:21.765 12:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:21.765 12:24:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.765 12:24:15 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:21.765 12:24:15 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:22.023 [2024-04-26 12:24:15.251734] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.023 12:24:15 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:22.281 Malloc0 00:27:22.281 12:24:15 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:22.539 12:24:15 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:22.797 12:24:16 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:23.057 [2024-04-26 12:24:16.390843] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:23.057 12:24:16 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:27:23.057 12:24:16 -- host/timeout.sh@32 -- # bdevperf_pid=78349 00:27:23.057 12:24:16 -- host/timeout.sh@34 -- # waitforlisten 78349 /var/tmp/bdevperf.sock 00:27:23.057 12:24:16 -- common/autotest_common.sh@817 -- # '[' -z 78349 ']' 00:27:23.057 12:24:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:23.057 12:24:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:23.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:23.057 12:24:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:23.057 12:24:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:23.057 12:24:16 -- common/autotest_common.sh@10 -- # set +x 00:27:23.057 [2024-04-26 12:24:16.451149] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:27:23.057 [2024-04-26 12:24:16.451260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78349 ] 00:27:23.315 [2024-04-26 12:24:16.587022] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.315 [2024-04-26 12:24:16.706123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.250 12:24:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:24.250 12:24:17 -- common/autotest_common.sh@850 -- # return 0 00:27:24.250 12:24:17 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:24.250 12:24:17 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:27:24.508 NVMe0n1 00:27:24.508 12:24:17 -- host/timeout.sh@51 -- # rpc_pid=78367 00:27:24.508 12:24:17 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:24.508 12:24:17 -- host/timeout.sh@53 -- # sleep 1 00:27:24.766 Running I/O for 10 seconds... 00:27:25.700 12:24:18 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:25.961 [2024-04-26 12:24:19.174464] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174526] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174538] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174547] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174556] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174565] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174573] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174582] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174590] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174599] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174607] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174616] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 
12:24:19.174625] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174635] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174643] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174651] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174659] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174668] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174676] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174684] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174692] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174699] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174707] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d5a0 is same with the state(5) to be set 00:27:25.961 [2024-04-26 12:24:19.174769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.961 [2024-04-26 12:24:19.174799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.961 [2024-04-26 12:24:19.174822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.961 [2024-04-26 12:24:19.174834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.961 [2024-04-26 12:24:19.174846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.961 [2024-04-26 12:24:19.174856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.961 [2024-04-26 12:24:19.174867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.961 [2024-04-26 12:24:19.174877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.174888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:69472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.174897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 
12:24:19.174909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.174919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.175364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.175397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.175412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:69496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.175425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.175436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.175446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.175458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:69512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.175467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.175479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.175488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.175499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:69528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.175509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.175520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.175530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.175652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.175668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.175681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:69552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.175691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.175702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.175712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.176102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:69568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.176127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.176140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.176150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.176162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.176187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.176200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:69592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.176210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.176222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:69600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.176231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.176242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.176252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.176263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:69616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.176273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.176284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:69624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.176657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.176687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.962 [2024-04-26 12:24:19.176698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.176710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:13 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.962 [2024-04-26 12:24:19.176720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.176732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.962 [2024-04-26 12:24:19.176742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.176754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.962 [2024-04-26 12:24:19.176764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.176775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.962 [2024-04-26 12:24:19.176785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.176796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.962 [2024-04-26 12:24:19.176806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.176817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.962 [2024-04-26 12:24:19.176827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.177195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.962 [2024-04-26 12:24:19.177212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.177224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.177234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.177246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.177256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.177337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.177352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.962 [2024-04-26 12:24:19.177364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69656 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.962 [2024-04-26 12:24:19.177373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.177393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.177404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.177524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:69672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.177544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.177557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.177698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.177837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:69688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.177949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.177965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.177975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.178202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.178223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.178236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.178246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.178257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:69720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.178267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.178278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.178288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.178299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:25.963 [2024-04-26 12:24:19.178309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.178320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.178329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.178456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.178578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.178594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.963 [2024-04-26 12:24:19.178605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.178701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.963 [2024-04-26 12:24:19.178717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.178728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.963 [2024-04-26 12:24:19.178738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.178845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.963 [2024-04-26 12:24:19.178864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.178877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.963 [2024-04-26 12:24:19.178888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.178981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.963 [2024-04-26 12:24:19.178994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.179005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.963 [2024-04-26 12:24:19.179015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.179027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.963 [2024-04-26 12:24:19.179036] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.179048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:69760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.179157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.179185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.179197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.179284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.179297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.179308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:69784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.179318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.179330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.179449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.179466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.179477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.179723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.179745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.179758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.179768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.179870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:69824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.179885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.179897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.179907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.180018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.963 [2024-04-26 12:24:19.180034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.963 [2024-04-26 12:24:19.180047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.964 [2024-04-26 12:24:19.180057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.180069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.964 [2024-04-26 12:24:19.180296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.180311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:69864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.964 [2024-04-26 12:24:19.180321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.180334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.964 [2024-04-26 12:24:19.180344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.180355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.964 [2024-04-26 12:24:19.180365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.180376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:69888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.964 [2024-04-26 12:24:19.180385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.180396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.964 [2024-04-26 12:24:19.180517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.180536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.964 [2024-04-26 12:24:19.180689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.180704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.964 [2024-04-26 12:24:19.180843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.180940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.964 [2024-04-26 12:24:19.180953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.180964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.964 [2024-04-26 12:24:19.180974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.180986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:69936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.964 [2024-04-26 12:24:19.180996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.181008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:69944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.964 [2024-04-26 12:24:19.181139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.181153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.964 [2024-04-26 12:24:19.181420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.181441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.964 [2024-04-26 12:24:19.181556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.181571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.964 [2024-04-26 12:24:19.181581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.181689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.964 [2024-04-26 12:24:19.181706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.181719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.964 [2024-04-26 12:24:19.181732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.181874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.964 [2024-04-26 12:24:19.182017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.182151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.964 [2024-04-26 12:24:19.182250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.182268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.964 [2024-04-26 12:24:19.182279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.182290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.964 [2024-04-26 12:24:19.182301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.182312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.964 [2024-04-26 12:24:19.182322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.182445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.964 [2024-04-26 12:24:19.182459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.182593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.964 [2024-04-26 12:24:19.182615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.182748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.964 [2024-04-26 12:24:19.182770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.183038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.964 [2024-04-26 12:24:19.183060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.183198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.964 [2024-04-26 12:24:19.183340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.183455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.964 [2024-04-26 12:24:19.183472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 
12:24:19.183485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.964 [2024-04-26 12:24:19.183625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.183738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.964 [2024-04-26 12:24:19.183751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.183762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:69968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.964 [2024-04-26 12:24:19.183863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.183881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.964 [2024-04-26 12:24:19.183892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.964 [2024-04-26 12:24:19.183903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.964 [2024-04-26 12:24:19.183913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.184037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.965 [2024-04-26 12:24:19.184058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.184071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.965 [2024-04-26 12:24:19.184199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.184343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.965 [2024-04-26 12:24:19.184366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.184496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:70016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.965 [2024-04-26 12:24:19.184509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.184646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.965 [2024-04-26 12:24:19.184778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.184802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.965 [2024-04-26 12:24:19.184946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.185073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.965 [2024-04-26 12:24:19.185086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.185219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:70048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.965 [2024-04-26 12:24:19.185233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.185329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.965 [2024-04-26 12:24:19.185341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.185353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.965 [2024-04-26 12:24:19.185363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.185374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.965 [2024-04-26 12:24:19.185504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.185518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.965 [2024-04-26 12:24:19.185603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.185617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.965 [2024-04-26 12:24:19.185627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.185649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.965 [2024-04-26 12:24:19.185659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.185787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.965 [2024-04-26 12:24:19.185807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.185948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:71 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.965 [2024-04-26 12:24:19.186052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.186068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.965 [2024-04-26 12:24:19.186079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.186090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.965 [2024-04-26 12:24:19.186099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.186110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:25.965 [2024-04-26 12:24:19.186255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.186369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.965 [2024-04-26 12:24:19.186382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.186394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:70088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.965 [2024-04-26 12:24:19.186403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.186414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.965 [2024-04-26 12:24:19.186424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.186652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.965 [2024-04-26 12:24:19.186665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.186677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.965 [2024-04-26 12:24:19.186687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.186698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.965 [2024-04-26 12:24:19.186708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.186984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70128 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.965 [2024-04-26 12:24:19.187083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.187097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1236510 is same with the state(5) to be set 00:27:25.965 [2024-04-26 12:24:19.187113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:25.965 [2024-04-26 12:24:19.187121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:25.965 [2024-04-26 12:24:19.187130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70136 len:8 PRP1 0x0 PRP2 0x0 00:27:25.965 [2024-04-26 12:24:19.187138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.187378] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1236510 was disconnected and freed. reset controller. 00:27:25.965 [2024-04-26 12:24:19.187673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:25.965 [2024-04-26 12:24:19.187700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.965 [2024-04-26 12:24:19.187713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:25.966 [2024-04-26 12:24:19.187722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.966 [2024-04-26 12:24:19.187732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:25.966 [2024-04-26 12:24:19.187741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.966 [2024-04-26 12:24:19.187751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:25.966 [2024-04-26 12:24:19.187759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.966 [2024-04-26 12:24:19.187768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11eee20 is same with the state(5) to be set 00:27:25.966 [2024-04-26 12:24:19.188192] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:25.966 [2024-04-26 12:24:19.188222] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11eee20 (9): Bad file descriptor 00:27:25.966 [2024-04-26 12:24:19.188330] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-04-26 12:24:19.188599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-04-26 12:24:19.188750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-04-26 12:24:19.188768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11eee20 with addr=10.0.0.2, port=4420 00:27:25.966 [2024-04-26 12:24:19.188879] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11eee20 is same with the state(5) to be set 00:27:25.966 [2024-04-26 12:24:19.189039] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11eee20 (9): Bad file descriptor 00:27:25.966 [2024-04-26 12:24:19.189154] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:25.966 [2024-04-26 12:24:19.189182] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:25.966 [2024-04-26 12:24:19.189195] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:25.966 [2024-04-26 12:24:19.189217] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:25.966 [2024-04-26 12:24:19.189343] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:25.966 12:24:19 -- host/timeout.sh@56 -- # sleep 2 00:27:27.869 [2024-04-26 12:24:21.189612] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.869 [2024-04-26 12:24:21.189735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.869 [2024-04-26 12:24:21.189781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.869 [2024-04-26 12:24:21.189798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11eee20 with addr=10.0.0.2, port=4420 00:27:27.869 [2024-04-26 12:24:21.189812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11eee20 is same with the state(5) to be set 00:27:27.869 [2024-04-26 12:24:21.189838] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11eee20 (9): Bad file descriptor 00:27:27.869 [2024-04-26 12:24:21.189859] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:27.869 [2024-04-26 12:24:21.189869] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:27.869 [2024-04-26 12:24:21.189881] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:27.869 [2024-04-26 12:24:21.189909] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:27.869 [2024-04-26 12:24:21.189921] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:27.869 12:24:21 -- host/timeout.sh@57 -- # get_controller 00:27:27.869 12:24:21 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:27.869 12:24:21 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:27:28.128 12:24:21 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:27:28.128 12:24:21 -- host/timeout.sh@58 -- # get_bdev 00:27:28.128 12:24:21 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:27:28.128 12:24:21 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:27:28.387 12:24:21 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:27:28.387 12:24:21 -- host/timeout.sh@61 -- # sleep 5 00:27:29.763 [2024-04-26 12:24:23.190087] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.763 [2024-04-26 12:24:23.190235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.763 [2024-04-26 12:24:23.190282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.763 [2024-04-26 12:24:23.190300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11eee20 with addr=10.0.0.2, port=4420 00:27:29.763 [2024-04-26 12:24:23.190315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11eee20 is same with the state(5) to be set 00:27:29.763 [2024-04-26 12:24:23.190345] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11eee20 (9): Bad file descriptor 00:27:29.763 [2024-04-26 12:24:23.190365] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.763 [2024-04-26 12:24:23.190375] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.763 [2024-04-26 12:24:23.190386] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.763 [2024-04-26 12:24:23.190416] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.763 [2024-04-26 12:24:23.190427] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.293 [2024-04-26 12:24:25.190479] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.860 00:27:32.860 Latency(us) 00:27:32.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.860 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:32.860 Verification LBA range: start 0x0 length 0x4000 00:27:32.860 NVMe0n1 : 8.15 1064.43 4.16 15.70 0.00 118605.87 3723.64 7046430.72 00:27:32.860 =================================================================================================================== 00:27:32.860 Total : 1064.43 4.16 15.70 0.00 118605.87 3723.64 7046430.72 00:27:32.860 0 00:27:33.427 12:24:26 -- host/timeout.sh@62 -- # get_controller 00:27:33.427 12:24:26 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:33.427 12:24:26 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:27:33.685 12:24:27 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:27:33.685 12:24:27 -- host/timeout.sh@63 -- # get_bdev 00:27:33.685 12:24:27 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:27:33.685 12:24:27 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:27:33.943 12:24:27 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:27:33.943 12:24:27 -- host/timeout.sh@65 -- # wait 78367 00:27:33.944 12:24:27 -- host/timeout.sh@67 -- # killprocess 78349 00:27:33.944 12:24:27 -- common/autotest_common.sh@936 -- # '[' -z 78349 ']' 00:27:33.944 12:24:27 -- common/autotest_common.sh@940 -- # kill -0 78349 00:27:33.944 12:24:27 -- common/autotest_common.sh@941 -- # uname 00:27:33.944 12:24:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:33.944 12:24:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78349 00:27:33.944 killing process with pid 78349 00:27:33.944 Received shutdown signal, test time was about 9.331077 seconds 00:27:33.944 00:27:33.944 Latency(us) 00:27:33.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.944 =================================================================================================================== 00:27:33.944 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:33.944 12:24:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:27:33.944 12:24:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:27:33.944 12:24:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78349' 00:27:33.944 12:24:27 -- common/autotest_common.sh@955 -- # kill 78349 00:27:33.944 12:24:27 -- common/autotest_common.sh@960 -- # wait 78349 00:27:34.201 12:24:27 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.459 [2024-04-26 12:24:27.835009] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:34.459 12:24:27 -- host/timeout.sh@74 -- # bdevperf_pid=78492 00:27:34.459 12:24:27 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:27:34.459 12:24:27 -- host/timeout.sh@76 -- # waitforlisten 78492 /var/tmp/bdevperf.sock 00:27:34.459 12:24:27 -- common/autotest_common.sh@817 -- # '[' -z 78492 ']' 00:27:34.459 12:24:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:34.459 12:24:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:34.459 12:24:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:34.459 12:24:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:34.459 12:24:27 -- common/autotest_common.sh@10 -- # set +x 00:27:34.459 [2024-04-26 12:24:27.903708] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:27:34.459 [2024-04-26 12:24:27.904113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78492 ] 00:27:34.717 [2024-04-26 12:24:28.036392] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.717 [2024-04-26 12:24:28.161311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:35.651 12:24:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:35.651 12:24:28 -- common/autotest_common.sh@850 -- # return 0 00:27:35.651 12:24:28 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:35.651 12:24:29 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:27:35.908 NVMe0n1 00:27:36.194 12:24:29 -- host/timeout.sh@84 -- # rpc_pid=78515 00:27:36.194 12:24:29 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:36.194 12:24:29 -- host/timeout.sh@86 -- # sleep 1 00:27:36.194 Running I/O for 10 seconds... 
00:27:37.128 12:24:30 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.386 [2024-04-26 12:24:30.666889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:69088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.386 [2024-04-26 12:24:30.666970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.386 [2024-04-26 12:24:30.666998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:69096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.386 [2024-04-26 12:24:30.667010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.386 [2024-04-26 12:24:30.667023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:69104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.386 [2024-04-26 12:24:30.667034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.386 [2024-04-26 12:24:30.667046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:69112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.386 [2024-04-26 12:24:30.667055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.386 [2024-04-26 12:24:30.667068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:68448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.386 [2024-04-26 12:24:30.667077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.386 [2024-04-26 12:24:30.667089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:68456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.386 [2024-04-26 12:24:30.667098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.386 [2024-04-26 12:24:30.667110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:68464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.386 [2024-04-26 12:24:30.667120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.386 [2024-04-26 12:24:30.667131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:68472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.386 [2024-04-26 12:24:30.667141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.386 [2024-04-26 12:24:30.667153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:68480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.386 [2024-04-26 12:24:30.667162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.386 [2024-04-26 12:24:30.667194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:68488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 
12:24:30.667207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:68496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 12:24:30.667229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:68504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 12:24:30.667251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:68512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 12:24:30.667272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:68520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 12:24:30.667303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 12:24:30.667324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:68536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 12:24:30.667345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:68544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 12:24:30.667366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:68552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 12:24:30.667393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:68560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 12:24:30.667414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 12:24:30.667434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:69120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.667455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:69128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.667475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:69136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.667495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:69144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.667516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:69152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.667536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:69160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.667559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.667600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.667621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:69184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.667642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.667662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:69200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.667682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.667702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:69216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.667722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.667734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.667743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.668082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:68576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 12:24:30.668108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.668122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:68584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 12:24:30.668132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.668144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:68592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 12:24:30.668153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.668164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:68600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 12:24:30.668185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.668198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:68608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 12:24:30.668207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.668219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 12:24:30.668234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.668245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:68624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 12:24:30.668254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.668266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:68632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.387 [2024-04-26 12:24:30.668275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.668287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:69232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.668296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.668307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.668316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.668328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.668337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.668348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.668357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.668369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.668379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.668390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.668399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.668410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:69280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.387 [2024-04-26 12:24:30.668419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.387 [2024-04-26 12:24:30.668431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:69288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.388 [2024-04-26 12:24:30.668441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 
12:24:30.668452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.388 [2024-04-26 12:24:30.668461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.668472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.388 [2024-04-26 12:24:30.668481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.668492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.388 [2024-04-26 12:24:30.668501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.668899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.388 [2024-04-26 12:24:30.668925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.668940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.388 [2024-04-26 12:24:30.668950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.668961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:69336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.388 [2024-04-26 12:24:30.668970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.668984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.668993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:68648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:68656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:68664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:68672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:68680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:68696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:68704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:68712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:68720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:68728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:68736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:68744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:22 nsid:1 lba:68752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:68768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:68776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:68808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:69344 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.388 [2024-04-26 12:24:30.669512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.388 [2024-04-26 12:24:30.669534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.388 [2024-04-26 12:24:30.669554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.388 [2024-04-26 12:24:30.669576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.388 [2024-04-26 12:24:30.669597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.388 [2024-04-26 12:24:30.669618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.388 [2024-04-26 12:24:30.669638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:69400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.388 [2024-04-26 12:24:30.669658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.388 [2024-04-26 12:24:30.669691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.388 [2024-04-26 12:24:30.669703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.669712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.669724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 
[2024-04-26 12:24:30.669733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.669746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.669755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.669768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.669777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.669789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.669798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.669810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.669819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.669831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.669840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.669851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.669860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.669872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.669882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.669894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.669903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.669914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.669923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.669935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.669944] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.669955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.669964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.669975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.669984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.669996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.670006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:69408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.389 [2024-04-26 12:24:30.670032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:69416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.389 [2024-04-26 12:24:30.670053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.389 [2024-04-26 12:24:30.670074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.389 [2024-04-26 12:24:30.670094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:69440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.389 [2024-04-26 12:24:30.670114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.389 [2024-04-26 12:24:30.670134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:69456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.389 [2024-04-26 12:24:30.670155] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.389 [2024-04-26 12:24:30.670187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.670208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.670238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.670259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.670280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.670301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.670326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.670347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.670368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.670394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.670415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.670435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.670456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.670476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.670496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.389 [2024-04-26 12:24:30.670516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.670527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e28630 is same with the state(5) to be set 00:27:37.389 [2024-04-26 12:24:30.670541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:37.389 [2024-04-26 12:24:30.670549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:37.389 [2024-04-26 12:24:30.670557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69080 len:8 PRP1 0x0 PRP2 0x0 00:27:37.389 [2024-04-26 12:24:30.670575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.389 [2024-04-26 12:24:30.671266] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e28630 was disconnected and freed. reset controller. 
00:27:37.390 [2024-04-26 12:24:30.671556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.390 [2024-04-26 12:24:30.671712] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de0e20 (9): Bad file descriptor 00:27:37.390 [2024-04-26 12:24:30.671827] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.390 [2024-04-26 12:24:30.671893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.390 [2024-04-26 12:24:30.671935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.390 [2024-04-26 12:24:30.671951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de0e20 with addr=10.0.0.2, port=4420 00:27:37.390 [2024-04-26 12:24:30.671962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de0e20 is same with the state(5) to be set 00:27:37.390 [2024-04-26 12:24:30.671981] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de0e20 (9): Bad file descriptor 00:27:37.390 [2024-04-26 12:24:30.671997] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.390 [2024-04-26 12:24:30.672006] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.390 [2024-04-26 12:24:30.672017] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.390 [2024-04-26 12:24:30.672037] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.390 [2024-04-26 12:24:30.672049] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.390 12:24:30 -- host/timeout.sh@90 -- # sleep 1 00:27:38.320 [2024-04-26 12:24:31.672231] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.320 [2024-04-26 12:24:31.672365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.320 [2024-04-26 12:24:31.672419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.320 [2024-04-26 12:24:31.672438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de0e20 with addr=10.0.0.2, port=4420 00:27:38.320 [2024-04-26 12:24:31.672454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de0e20 is same with the state(5) to be set 00:27:38.320 [2024-04-26 12:24:31.672486] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de0e20 (9): Bad file descriptor 00:27:38.320 [2024-04-26 12:24:31.672531] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.320 [2024-04-26 12:24:31.672552] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.320 [2024-04-26 12:24:31.672572] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.320 [2024-04-26 12:24:31.672613] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
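A minimal shell sketch of the listener toggle behind the repeated "connect() failed, errno = 111" loop above, assuming the rpc.py path, subsystem NQN, and TCP address/port that appear verbatim elsewhere in this log; the exact ordering and timing inside host/timeout.sh may differ from this reconstruction:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # Removing the TCP listener makes the host's reconnect attempts fail with
  # ECONNREFUSED (errno 111), so every controller reset attempt fails.
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  # Re-adding the listener lets the next reset/reconnect succeed.
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420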
00:27:38.320 [2024-04-26 12:24:31.672629] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.320 12:24:31 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:38.576 [2024-04-26 12:24:31.957213] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.577 12:24:31 -- host/timeout.sh@92 -- # wait 78515 00:27:39.508 [2024-04-26 12:24:32.692859] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:46.071 00:27:46.071 Latency(us) 00:27:46.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.071 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:46.071 Verification LBA range: start 0x0 length 0x4000 00:27:46.071 NVMe0n1 : 10.01 6177.84 24.13 0.00 0.00 20673.91 1109.64 3019898.88 00:27:46.071 =================================================================================================================== 00:27:46.071 Total : 6177.84 24.13 0.00 0.00 20673.91 1109.64 3019898.88 00:27:46.071 0 00:27:46.330 12:24:39 -- host/timeout.sh@97 -- # rpc_pid=78620 00:27:46.330 12:24:39 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:46.330 12:24:39 -- host/timeout.sh@98 -- # sleep 1 00:27:46.330 Running I/O for 10 seconds... 00:27:47.267 12:24:40 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:47.529 [2024-04-26 12:24:40.793795] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.793885] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.793905] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.793919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.793932] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.793946] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.793960] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.793973] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.793986] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.793999] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.794012] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.794025] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.794037] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.794050] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.794063] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.794075] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.794088] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.794101] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.794114] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.794127] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.794140] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.794154] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.794166] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64480 is same with the state(5) to be set 00:27:47.529 [2024-04-26 12:24:40.794269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.529 [2024-04-26 12:24:40.794300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.529 [2024-04-26 12:24:40.794323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.529 [2024-04-26 12:24:40.794334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.529 [2024-04-26 12:24:40.794348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.529 [2024-04-26 12:24:40.794358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.529 [2024-04-26 12:24:40.794370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.529 [2024-04-26 12:24:40.794379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.529 [2024-04-26 12:24:40.794390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.529 [2024-04-26 12:24:40.794399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.529 [2024-04-26 12:24:40.794411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.529 [2024-04-26 12:24:40.794421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.794432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.794441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.794452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.794461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.794472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.794481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.794492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.794501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.794512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.794521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.794533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.794541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.794553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.794561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.794906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.794935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.794957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.794973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.794991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.795003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.795026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.795056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.795089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.795112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.795132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.795152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.795187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.795222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.530 [2024-04-26 12:24:40.795244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.530 [2024-04-26 12:24:40.795265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.530 [2024-04-26 12:24:40.795286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.530 [2024-04-26 12:24:40.795308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.530 [2024-04-26 12:24:40.795329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.530 [2024-04-26 12:24:40.795349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.530 [2024-04-26 12:24:40.795370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.530 [2024-04-26 12:24:40.795390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.530 [2024-04-26 12:24:40.795411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.530 [2024-04-26 12:24:40.795433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.530 [2024-04-26 12:24:40.795453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 
[2024-04-26 12:24:40.795465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.530 [2024-04-26 12:24:40.795474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.530 [2024-04-26 12:24:40.795495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.530 [2024-04-26 12:24:40.795515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.530 [2024-04-26 12:24:40.795536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.530 [2024-04-26 12:24:40.795557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.795590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.795615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.795636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.795657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.530 [2024-04-26 12:24:40.795678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.530 [2024-04-26 12:24:40.795690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.795699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.795710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.795720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.795731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.795740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.795751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.795761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.795773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.795782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.795793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.795803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.795815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.795824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.795836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.795845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.795857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.795865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.795877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.795886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.795897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:52 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.795906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.795917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.795926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.795937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.795948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.795959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.795969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.795980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.795990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.796011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.796032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.796052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.796074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.796096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65992 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.796117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.796139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.796160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.796192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.796213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.796235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.531 [2024-04-26 12:24:40.796255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.796275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.796296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.796317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 
[2024-04-26 12:24:40.796338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.796360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.796381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.796412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.796432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.796453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.796473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.796494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.796515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.531 [2024-04-26 12:24:40.796535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.531 [2024-04-26 12:24:40.796546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.796555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.796575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.796596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.796628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.796649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.796669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.796690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.796710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.796730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.796750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.796770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.532 [2024-04-26 12:24:40.796791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.532 [2024-04-26 12:24:40.796812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.532 [2024-04-26 12:24:40.796832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.532 [2024-04-26 12:24:40.796853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.532 [2024-04-26 12:24:40.796873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.532 [2024-04-26 12:24:40.796895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.532 [2024-04-26 12:24:40.796915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.532 [2024-04-26 12:24:40.796936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.796961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.796981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.796992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.797001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.797013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.797022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.797032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.797041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.797052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.797062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.797073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.797082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.797093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.797102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.797113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.797123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.797134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.797144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.797155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.797165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.797186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.797196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 
12:24:40.797207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.797217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.797228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.797238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.797249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.797265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.797276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.532 [2024-04-26 12:24:40.797286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.797302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.532 [2024-04-26 12:24:40.797312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.797323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.532 [2024-04-26 12:24:40.797332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.797344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.532 [2024-04-26 12:24:40.797353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.797364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.532 [2024-04-26 12:24:40.797373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.532 [2024-04-26 12:24:40.797384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.532 [2024-04-26 12:24:40.797392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.533 [2024-04-26 12:24:40.797404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.533 [2024-04-26 12:24:40.797413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.533 [2024-04-26 12:24:40.797425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.533 [2024-04-26 12:24:40.797433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.533 [2024-04-26 12:24:40.797444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4a110 is same with the state(5) to be set 00:27:47.533 [2024-04-26 12:24:40.797458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:47.533 [2024-04-26 12:24:40.797465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:47.533 [2024-04-26 12:24:40.797474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66168 len:8 PRP1 0x0 PRP2 0x0 00:27:47.533 [2024-04-26 12:24:40.797483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.533 [2024-04-26 12:24:40.798323] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e4a110 was disconnected and freed. reset controller. 00:27:47.533 [2024-04-26 12:24:40.798588] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.533 [2024-04-26 12:24:40.798680] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de0e20 (9): Bad file descriptor 00:27:47.533 [2024-04-26 12:24:40.798802] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.533 [2024-04-26 12:24:40.798852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.533 [2024-04-26 12:24:40.798892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.533 [2024-04-26 12:24:40.798908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de0e20 with addr=10.0.0.2, port=4420 00:27:47.533 [2024-04-26 12:24:40.798918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de0e20 is same with the state(5) to be set 00:27:47.533 [2024-04-26 12:24:40.798937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de0e20 (9): Bad file descriptor 00:27:47.533 [2024-04-26 12:24:40.798952] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.533 [2024-04-26 12:24:40.798961] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.533 [2024-04-26 12:24:40.798971] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.533 [2024-04-26 12:24:40.798991] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.533 [2024-04-26 12:24:40.799009] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.533 12:24:40 -- host/timeout.sh@101 -- # sleep 3 00:27:48.478 [2024-04-26 12:24:41.799162] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.478 [2024-04-26 12:24:41.799676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.478 [2024-04-26 12:24:41.799981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.478 [2024-04-26 12:24:41.800260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de0e20 with addr=10.0.0.2, port=4420 00:27:48.478 [2024-04-26 12:24:41.800511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de0e20 is same with the state(5) to be set 00:27:48.478 [2024-04-26 12:24:41.800555] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de0e20 (9): Bad file descriptor 00:27:48.478 [2024-04-26 12:24:41.800576] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:48.478 [2024-04-26 12:24:41.800586] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:48.478 [2024-04-26 12:24:41.800597] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:48.478 [2024-04-26 12:24:41.800626] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:48.478 [2024-04-26 12:24:41.800638] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.416 [2024-04-26 12:24:42.800801] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.417 [2024-04-26 12:24:42.800920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.417 [2024-04-26 12:24:42.800964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.417 [2024-04-26 12:24:42.800981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de0e20 with addr=10.0.0.2, port=4420 00:27:49.417 [2024-04-26 12:24:42.800996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de0e20 is same with the state(5) to be set 00:27:49.417 [2024-04-26 12:24:42.801025] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de0e20 (9): Bad file descriptor 00:27:49.417 [2024-04-26 12:24:42.801044] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.417 [2024-04-26 12:24:42.801054] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.417 [2024-04-26 12:24:42.801066] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.417 [2024-04-26 12:24:42.801095] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.417 [2024-04-26 12:24:42.801107] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.350 [2024-04-26 12:24:43.803552] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.350 [2024-04-26 12:24:43.803677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.350 [2024-04-26 12:24:43.803721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.350 [2024-04-26 12:24:43.803738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de0e20 with addr=10.0.0.2, port=4420 00:27:50.350 [2024-04-26 12:24:43.803752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de0e20 is same with the state(5) to be set 00:27:50.350 [2024-04-26 12:24:43.804011] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de0e20 (9): Bad file descriptor 00:27:50.350 [2024-04-26 12:24:43.804285] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.350 [2024-04-26 12:24:43.804301] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.350 [2024-04-26 12:24:43.804312] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.350 [2024-04-26 12:24:43.808392] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:50.350 [2024-04-26 12:24:43.808438] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.350 12:24:43 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:50.942 [2024-04-26 12:24:44.104420] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.942 12:24:44 -- host/timeout.sh@103 -- # wait 78620 00:27:51.525 [2024-04-26 12:24:44.842073] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:56.820
00:27:56.820 Latency(us)
00:27:56.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:56.820 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:56.820 Verification LBA range: start 0x0 length 0x4000
00:27:56.820 NVMe0n1 : 10.01 5245.62 20.49 3680.76 0.00 14304.34 670.25 3019898.88
00:27:56.820 ===================================================================================================================
00:27:56.820 Total : 5245.62 20.49 3680.76 0.00 14304.34 0.00 3019898.88
00:27:56.820 0
00:27:56.820 12:24:49 -- host/timeout.sh@105 -- # killprocess 78492
00:27:56.820 12:24:49 -- common/autotest_common.sh@936 -- # '[' -z 78492 ']'
00:27:56.820 12:24:49 -- common/autotest_common.sh@940 -- # kill -0 78492
00:27:56.820 12:24:49 -- common/autotest_common.sh@941 -- # uname
00:27:56.820 12:24:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:27:56.820 12:24:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78492
00:27:56.820 killing process with pid 78492 Received shutdown signal, test time was about 10.000000 seconds
00:27:56.820
00:27:56.820 Latency(us)
00:27:56.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:56.820 ===================================================================================================================
00:27:56.820 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:56.820 12:24:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:27:56.820 12:24:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:27:56.820 12:24:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78492'
00:27:56.820 12:24:49 -- common/autotest_common.sh@955 -- # kill 78492
00:27:56.820 12:24:49 -- common/autotest_common.sh@960 -- # wait 78492
00:27:56.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:27:56.820 12:24:49 -- host/timeout.sh@110 -- # bdevperf_pid=78735
00:27:56.820 12:24:49 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:27:56.820 12:24:49 -- host/timeout.sh@112 -- # waitforlisten 78735 /var/tmp/bdevperf.sock
00:27:56.820 12:24:49 -- common/autotest_common.sh@817 -- # '[' -z 78735 ']'
00:27:56.820 12:24:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:56.820 12:24:49 -- common/autotest_common.sh@822 -- # local max_retries=100
00:27:56.820 12:24:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:27:56.820 12:24:49 -- common/autotest_common.sh@826 -- # xtrace_disable
00:27:56.820 12:24:49 -- common/autotest_common.sh@10 -- # set +x
00:27:56.820 [2024-04-26 12:24:50.025894] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization...
00:27:56.820 [2024-04-26 12:24:50.026245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78735 ] 00:27:56.820 [2024-04-26 12:24:50.170857] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.078 [2024-04-26 12:24:50.292742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:57.643 12:24:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:57.643 12:24:51 -- common/autotest_common.sh@850 -- # return 0 00:27:57.643 12:24:51 -- host/timeout.sh@116 -- # dtrace_pid=78755 00:27:57.643 12:24:51 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 78735 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:27:57.643 12:24:51 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:27:57.900 12:24:51 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:27:58.467 NVMe0n1 00:27:58.467 12:24:51 -- host/timeout.sh@124 -- # rpc_pid=78792 00:27:58.467 12:24:51 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:58.467 12:24:51 -- host/timeout.sh@125 -- # sleep 1 00:27:58.467 Running I/O for 10 seconds... 00:27:59.402 12:24:52 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:59.663 [2024-04-26 12:24:52.918099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.663 [2024-04-26 12:24:52.918165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.663 [2024-04-26 12:24:52.918207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.663 [2024-04-26 12:24:52.918219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.663 [2024-04-26 12:24:52.918231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.663 [2024-04-26 12:24:52.918241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.663 [2024-04-26 12:24:52.918252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.663 [2024-04-26 12:24:52.918262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.663 [2024-04-26 12:24:52.918274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.663 [2024-04-26 12:24:52.918284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:59.663 [2024-04-26 12:24:52.918295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.663 [2024-04-26 12:24:52.918304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.663 [2024-04-26 12:24:52.918316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.663 [2024-04-26 12:24:52.918325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.663 [2024-04-26 12:24:52.918336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.663 [2024-04-26 12:24:52.918346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.663 [2024-04-26 12:24:52.918357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.663 [2024-04-26 12:24:52.918366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.663 [2024-04-26 12:24:52.918377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.663 [2024-04-26 12:24:52.918386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.663 [2024-04-26 12:24:52.918397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.918406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.918417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:68096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.918426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.918437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.918446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.918457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.918466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.918477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.918486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:59.664 [2024-04-26 12:24:52.918499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.918508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.918519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.918528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.918541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.918550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.918562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.918571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.918583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:40480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.918592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.918945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:124440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.918967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.918980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:27704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.918989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 
12:24:52.919062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:115432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:56792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.664 [2024-04-26 12:24:52.919568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.664 [2024-04-26 12:24:52.919577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.919601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.919612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.919623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.919772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102600 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:87032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:37552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:59.665 [2024-04-26 12:24:52.920845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:91512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.920981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.920990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.921001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.921010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.921021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.921030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.921041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.921050] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.921061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.921070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.921081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.921090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.921102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.921112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.921123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.921132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.921144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.921154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.921166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.921189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.921202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.921211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.921223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.921232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.921243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.665 [2024-04-26 12:24:52.921252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.665 [2024-04-26 12:24:52.921264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:124704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:54488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:59.666 [2024-04-26 12:24:52.921921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.921982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.921991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.922002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:88168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.922011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.922022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.922032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.922042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.922052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.922063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.922072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.922083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.666 [2024-04-26 12:24:52.922092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.666 [2024-04-26 12:24:52.922103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.667 [2024-04-26 12:24:52.922112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.667 [2024-04-26 12:24:52.922123] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.667 [2024-04-26 12:24:52.922132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.667 [2024-04-26 12:24:52.922142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f320 is same with the state(5) to be set 00:27:59.667 [2024-04-26 12:24:52.922156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.667 [2024-04-26 12:24:52.922164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.667 [2024-04-26 12:24:52.922183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27440 len:8 PRP1 0x0 PRP2 0x0 00:27:59.667 [2024-04-26 12:24:52.922198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.667 [2024-04-26 12:24:52.922258] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d9f320 was disconnected and freed. reset controller. 00:27:59.667 [2024-04-26 12:24:52.922547] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.667 [2024-04-26 12:24:52.922635] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3c090 (9): Bad file descriptor 00:27:59.667 [2024-04-26 12:24:52.922747] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-04-26 12:24:52.922812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-04-26 12:24:52.922856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-04-26 12:24:52.922872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3c090 with addr=10.0.0.2, port=4420 00:27:59.667 [2024-04-26 12:24:52.922882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3c090 is same with the state(5) to be set 00:27:59.667 [2024-04-26 12:24:52.922901] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3c090 (9): Bad file descriptor 00:27:59.667 [2024-04-26 12:24:52.922916] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:59.667 [2024-04-26 12:24:52.922926] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:59.667 [2024-04-26 12:24:52.922936] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:59.667 [2024-04-26 12:24:52.922956] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:59.667 [2024-04-26 12:24:52.922966] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.667 12:24:52 -- host/timeout.sh@128 -- # wait 78792 00:28:01.570 [2024-04-26 12:24:54.923161] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.570 [2024-04-26 12:24:54.923288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.570 [2024-04-26 12:24:54.923334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.570 [2024-04-26 12:24:54.923351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3c090 with addr=10.0.0.2, port=4420 00:28:01.570 [2024-04-26 12:24:54.923365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3c090 is same with the state(5) to be set 00:28:01.570 [2024-04-26 12:24:54.923392] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3c090 (9): Bad file descriptor 00:28:01.570 [2024-04-26 12:24:54.923424] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:01.570 [2024-04-26 12:24:54.923436] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:01.570 [2024-04-26 12:24:54.923446] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:01.570 [2024-04-26 12:24:54.923474] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:01.570 [2024-04-26 12:24:54.923485] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.495 [2024-04-26 12:24:56.923647] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.495 [2024-04-26 12:24:56.923751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.495 [2024-04-26 12:24:56.923797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.495 [2024-04-26 12:24:56.923813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3c090 with addr=10.0.0.2, port=4420 00:28:03.495 [2024-04-26 12:24:56.923828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3c090 is same with the state(5) to be set 00:28:03.495 [2024-04-26 12:24:56.923853] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3c090 (9): Bad file descriptor 00:28:03.495 [2024-04-26 12:24:56.923872] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.495 [2024-04-26 12:24:56.923882] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.495 [2024-04-26 12:24:56.923893] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.495 [2024-04-26 12:24:56.923920] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.495 [2024-04-26 12:24:56.923932] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.027 [2024-04-26 12:24:58.924011] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.595 00:28:06.595 Latency(us) 00:28:06.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.595 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:28:06.595 NVMe0n1 : 8.17 2098.46 8.20 15.67 0.00 60484.36 8400.52 7015926.69 00:28:06.595 =================================================================================================================== 00:28:06.595 Total : 2098.46 8.20 15.67 0.00 60484.36 8400.52 7015926.69 00:28:06.595 0 00:28:06.595 12:24:59 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:06.595 Attaching 5 probes... 00:28:06.595 1304.015822: reset bdev controller NVMe0 00:28:06.595 1304.162048: reconnect bdev controller NVMe0 00:28:06.595 3304.476765: reconnect delay bdev controller NVMe0 00:28:06.595 3304.519760: reconnect bdev controller NVMe0 00:28:06.595 5304.999428: reconnect delay bdev controller NVMe0 00:28:06.595 5305.021925: reconnect bdev controller NVMe0 00:28:06.595 7305.449720: reconnect delay bdev controller NVMe0 00:28:06.595 7305.475706: reconnect bdev controller NVMe0 00:28:06.595 12:24:59 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:28:06.595 12:24:59 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:28:06.595 12:24:59 -- host/timeout.sh@136 -- # kill 78755 00:28:06.595 12:24:59 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:06.595 12:24:59 -- host/timeout.sh@139 -- # killprocess 78735 00:28:06.595 12:24:59 -- common/autotest_common.sh@936 -- # '[' -z 78735 ']' 00:28:06.595 12:24:59 -- common/autotest_common.sh@940 -- # kill -0 78735 00:28:06.595 12:24:59 -- common/autotest_common.sh@941 -- # uname 00:28:06.595 12:24:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:06.595 12:24:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78735 00:28:06.595 killing process with pid 78735 00:28:06.595 Received shutdown signal, test time was about 8.227765 seconds 00:28:06.595 00:28:06.595 Latency(us) 00:28:06.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.595 =================================================================================================================== 00:28:06.595 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:06.595 12:24:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:28:06.595 12:24:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:28:06.596 12:24:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78735' 00:28:06.596 12:24:59 -- common/autotest_common.sh@955 -- # kill 78735 00:28:06.596 12:24:59 -- common/autotest_common.sh@960 -- # wait 78735 00:28:06.854 12:25:00 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:07.112 12:25:00 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:28:07.112 12:25:00 -- host/timeout.sh@145 -- # nvmftestfini 00:28:07.112 12:25:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:07.112 12:25:00 -- nvmf/common.sh@117 -- # sync 00:28:07.112 12:25:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:07.112 12:25:00 -- nvmf/common.sh@120 -- # set +e 00:28:07.112 12:25:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:07.112 12:25:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:07.112 rmmod nvme_tcp 00:28:07.112 rmmod nvme_fabrics 00:28:07.112 rmmod nvme_keyring 00:28:07.112 12:25:00 -- nvmf/common.sh@123 -- 
# modprobe -v -r nvme-fabrics 00:28:07.112 12:25:00 -- nvmf/common.sh@124 -- # set -e 00:28:07.112 12:25:00 -- nvmf/common.sh@125 -- # return 0 00:28:07.112 12:25:00 -- nvmf/common.sh@478 -- # '[' -n 78293 ']' 00:28:07.112 12:25:00 -- nvmf/common.sh@479 -- # killprocess 78293 00:28:07.112 12:25:00 -- common/autotest_common.sh@936 -- # '[' -z 78293 ']' 00:28:07.112 12:25:00 -- common/autotest_common.sh@940 -- # kill -0 78293 00:28:07.112 12:25:00 -- common/autotest_common.sh@941 -- # uname 00:28:07.112 12:25:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:07.112 12:25:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78293 00:28:07.112 12:25:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:07.112 12:25:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:07.112 killing process with pid 78293 00:28:07.112 12:25:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78293' 00:28:07.112 12:25:00 -- common/autotest_common.sh@955 -- # kill 78293 00:28:07.112 12:25:00 -- common/autotest_common.sh@960 -- # wait 78293 00:28:07.680 12:25:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:07.680 12:25:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:07.680 12:25:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:07.680 12:25:00 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:07.680 12:25:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:07.680 12:25:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.680 12:25:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:07.680 12:25:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.680 12:25:00 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:07.680 00:28:07.680 real 0m47.422s 00:28:07.680 user 2m18.994s 00:28:07.680 sys 0m5.886s 00:28:07.680 12:25:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:07.680 ************************************ 00:28:07.680 END TEST nvmf_timeout 00:28:07.680 12:25:00 -- common/autotest_common.sh@10 -- # set +x 00:28:07.680 ************************************ 00:28:07.680 12:25:00 -- nvmf/nvmf.sh@118 -- # [[ virt == phy ]] 00:28:07.680 12:25:00 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:28:07.680 12:25:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:07.680 12:25:00 -- common/autotest_common.sh@10 -- # set +x 00:28:07.680 12:25:00 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:28:07.680 00:28:07.680 real 8m52.525s 00:28:07.680 user 20m57.489s 00:28:07.680 sys 2m24.882s 00:28:07.680 12:25:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:07.680 12:25:00 -- common/autotest_common.sh@10 -- # set +x 00:28:07.680 ************************************ 00:28:07.680 END TEST nvmf_tcp 00:28:07.680 ************************************ 00:28:07.680 12:25:01 -- spdk/autotest.sh@286 -- # [[ 1 -eq 0 ]] 00:28:07.680 12:25:01 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:28:07.680 12:25:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:07.680 12:25:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:07.680 12:25:01 -- common/autotest_common.sh@10 -- # set +x 00:28:07.680 ************************************ 00:28:07.680 START TEST nvmf_dif 00:28:07.680 ************************************ 00:28:07.680 12:25:01 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:28:07.938 * Looking for test storage... 00:28:07.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:07.938 12:25:01 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:07.938 12:25:01 -- nvmf/common.sh@7 -- # uname -s 00:28:07.938 12:25:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:07.938 12:25:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:07.938 12:25:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:07.938 12:25:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:07.938 12:25:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:07.938 12:25:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:07.938 12:25:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:07.938 12:25:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:07.938 12:25:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:07.938 12:25:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:07.938 12:25:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:28:07.938 12:25:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:28:07.938 12:25:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:07.938 12:25:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:07.938 12:25:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:07.938 12:25:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:07.938 12:25:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:07.938 12:25:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.938 12:25:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.938 12:25:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.938 12:25:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.938 12:25:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.938 12:25:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.938 12:25:01 -- paths/export.sh@5 -- # export PATH 00:28:07.938 12:25:01 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.938 12:25:01 -- nvmf/common.sh@47 -- # : 0 00:28:07.938 12:25:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:07.938 12:25:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:07.938 12:25:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:07.938 12:25:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:07.938 12:25:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:07.938 12:25:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:07.938 12:25:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:07.938 12:25:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:07.938 12:25:01 -- target/dif.sh@15 -- # NULL_META=16 00:28:07.938 12:25:01 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:07.938 12:25:01 -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:07.938 12:25:01 -- target/dif.sh@15 -- # NULL_DIF=1 00:28:07.938 12:25:01 -- target/dif.sh@135 -- # nvmftestinit 00:28:07.938 12:25:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:07.938 12:25:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:07.938 12:25:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:07.938 12:25:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:07.938 12:25:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:07.938 12:25:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.938 12:25:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:07.938 12:25:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.938 12:25:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:28:07.938 12:25:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:28:07.938 12:25:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:28:07.938 12:25:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:28:07.938 12:25:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:28:07.938 12:25:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:28:07.938 12:25:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:07.938 12:25:01 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:07.938 12:25:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:07.938 12:25:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:07.938 12:25:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:07.938 12:25:01 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:07.938 12:25:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:07.938 12:25:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:07.939 12:25:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:07.939 12:25:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:07.939 12:25:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:07.939 12:25:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:07.939 12:25:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:07.939 12:25:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:07.939 Cannot find device "nvmf_tgt_br" 
00:28:07.939 12:25:01 -- nvmf/common.sh@155 -- # true 00:28:07.939 12:25:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:07.939 Cannot find device "nvmf_tgt_br2" 00:28:07.939 12:25:01 -- nvmf/common.sh@156 -- # true 00:28:07.939 12:25:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:07.939 12:25:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:07.939 Cannot find device "nvmf_tgt_br" 00:28:07.939 12:25:01 -- nvmf/common.sh@158 -- # true 00:28:07.939 12:25:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:07.939 Cannot find device "nvmf_tgt_br2" 00:28:07.939 12:25:01 -- nvmf/common.sh@159 -- # true 00:28:07.939 12:25:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:07.939 12:25:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:07.939 12:25:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:07.939 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:07.939 12:25:01 -- nvmf/common.sh@162 -- # true 00:28:07.939 12:25:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:07.939 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:07.939 12:25:01 -- nvmf/common.sh@163 -- # true 00:28:07.939 12:25:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:07.939 12:25:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:07.939 12:25:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:07.939 12:25:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:07.939 12:25:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:07.939 12:25:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:07.939 12:25:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:07.939 12:25:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:08.206 12:25:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:08.206 12:25:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:08.206 12:25:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:08.206 12:25:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:08.206 12:25:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:08.206 12:25:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:08.206 12:25:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:08.206 12:25:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:08.206 12:25:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:08.206 12:25:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:08.206 12:25:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:08.206 12:25:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:08.206 12:25:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:08.206 12:25:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:08.206 12:25:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:08.206 12:25:01 -- 
nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:08.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:08.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:28:08.206 00:28:08.206 --- 10.0.0.2 ping statistics --- 00:28:08.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.206 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:28:08.206 12:25:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:08.206 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:08.206 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:28:08.206 00:28:08.206 --- 10.0.0.3 ping statistics --- 00:28:08.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.206 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:28:08.206 12:25:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:08.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:08.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:28:08.206 00:28:08.206 --- 10.0.0.1 ping statistics --- 00:28:08.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.206 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:28:08.206 12:25:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.206 12:25:01 -- nvmf/common.sh@422 -- # return 0 00:28:08.206 12:25:01 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:28:08.206 12:25:01 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:08.464 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:08.464 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:08.464 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:08.464 12:25:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.464 12:25:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:08.464 12:25:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:08.464 12:25:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:08.464 12:25:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:08.464 12:25:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:08.464 12:25:01 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:08.464 12:25:01 -- target/dif.sh@137 -- # nvmfappstart 00:28:08.464 12:25:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:08.464 12:25:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:08.464 12:25:01 -- common/autotest_common.sh@10 -- # set +x 00:28:08.723 12:25:01 -- nvmf/common.sh@470 -- # nvmfpid=79240 00:28:08.723 12:25:01 -- nvmf/common.sh@471 -- # waitforlisten 79240 00:28:08.723 12:25:01 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:08.723 12:25:01 -- common/autotest_common.sh@817 -- # '[' -z 79240 ']' 00:28:08.723 12:25:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.723 12:25:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:08.723 12:25:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:08.723 12:25:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:08.723 12:25:01 -- common/autotest_common.sh@10 -- # set +x 00:28:08.723 [2024-04-26 12:25:01.992023] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:28:08.723 [2024-04-26 12:25:01.992136] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:08.723 [2024-04-26 12:25:02.129888] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.981 [2024-04-26 12:25:02.254234] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.981 [2024-04-26 12:25:02.254309] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:08.981 [2024-04-26 12:25:02.254323] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.981 [2024-04-26 12:25:02.254333] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.981 [2024-04-26 12:25:02.254342] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:08.981 [2024-04-26 12:25:02.254375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.551 12:25:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:09.551 12:25:02 -- common/autotest_common.sh@850 -- # return 0 00:28:09.551 12:25:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:09.551 12:25:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:09.551 12:25:02 -- common/autotest_common.sh@10 -- # set +x 00:28:09.812 12:25:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:09.812 12:25:03 -- target/dif.sh@139 -- # create_transport 00:28:09.812 12:25:03 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:09.812 12:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.812 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:28:09.812 [2024-04-26 12:25:03.033024] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.812 12:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.812 12:25:03 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:09.812 12:25:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:09.812 12:25:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:09.812 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:28:09.812 ************************************ 00:28:09.812 START TEST fio_dif_1_default 00:28:09.812 ************************************ 00:28:09.812 12:25:03 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:28:09.812 12:25:03 -- target/dif.sh@86 -- # create_subsystems 0 00:28:09.812 12:25:03 -- target/dif.sh@28 -- # local sub 00:28:09.812 12:25:03 -- target/dif.sh@30 -- # for sub in "$@" 00:28:09.812 12:25:03 -- target/dif.sh@31 -- # create_subsystem 0 00:28:09.812 12:25:03 -- target/dif.sh@18 -- # local sub_id=0 00:28:09.812 12:25:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:09.812 12:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.812 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:28:09.812 bdev_null0 00:28:09.812 12:25:03 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.812 12:25:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:09.812 12:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.812 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:28:09.812 12:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.812 12:25:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:09.812 12:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.812 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:28:09.812 12:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.812 12:25:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:09.812 12:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.812 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:28:09.812 [2024-04-26 12:25:03.141137] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.812 12:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.812 12:25:03 -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:09.812 12:25:03 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:09.812 12:25:03 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:09.812 12:25:03 -- nvmf/common.sh@521 -- # config=() 00:28:09.812 12:25:03 -- nvmf/common.sh@521 -- # local subsystem config 00:28:09.812 12:25:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:09.812 12:25:03 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:09.812 12:25:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:09.812 { 00:28:09.812 "params": { 00:28:09.812 "name": "Nvme$subsystem", 00:28:09.812 "trtype": "$TEST_TRANSPORT", 00:28:09.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.812 "adrfam": "ipv4", 00:28:09.812 "trsvcid": "$NVMF_PORT", 00:28:09.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.812 "hdgst": ${hdgst:-false}, 00:28:09.812 "ddgst": ${ddgst:-false} 00:28:09.812 }, 00:28:09.812 "method": "bdev_nvme_attach_controller" 00:28:09.812 } 00:28:09.812 EOF 00:28:09.812 )") 00:28:09.812 12:25:03 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:09.812 12:25:03 -- target/dif.sh@82 -- # gen_fio_conf 00:28:09.812 12:25:03 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:09.812 12:25:03 -- target/dif.sh@54 -- # local file 00:28:09.812 12:25:03 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:09.812 12:25:03 -- target/dif.sh@56 -- # cat 00:28:09.812 12:25:03 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:09.812 12:25:03 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:09.812 12:25:03 -- common/autotest_common.sh@1327 -- # shift 00:28:09.812 12:25:03 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:09.812 12:25:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:09.812 12:25:03 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:09.812 12:25:03 -- nvmf/common.sh@543 -- # cat 00:28:09.813 12:25:03 -- 
common/autotest_common.sh@1331 -- # grep libasan 00:28:09.813 12:25:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:09.813 12:25:03 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:09.813 12:25:03 -- target/dif.sh@72 -- # (( file <= files )) 00:28:09.813 12:25:03 -- nvmf/common.sh@545 -- # jq . 00:28:09.813 12:25:03 -- nvmf/common.sh@546 -- # IFS=, 00:28:09.813 12:25:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:09.813 "params": { 00:28:09.813 "name": "Nvme0", 00:28:09.813 "trtype": "tcp", 00:28:09.813 "traddr": "10.0.0.2", 00:28:09.813 "adrfam": "ipv4", 00:28:09.813 "trsvcid": "4420", 00:28:09.813 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:09.813 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:09.813 "hdgst": false, 00:28:09.813 "ddgst": false 00:28:09.813 }, 00:28:09.813 "method": "bdev_nvme_attach_controller" 00:28:09.813 }' 00:28:09.813 12:25:03 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:09.813 12:25:03 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:09.813 12:25:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:09.813 12:25:03 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:09.813 12:25:03 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:09.813 12:25:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:09.813 12:25:03 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:09.813 12:25:03 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:09.813 12:25:03 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:09.813 12:25:03 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:10.078 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:10.078 fio-3.35 00:28:10.078 Starting 1 thread 00:28:22.278 00:28:22.278 filename0: (groupid=0, jobs=1): err= 0: pid=79311: Fri Apr 26 12:25:13 2024 00:28:22.278 read: IOPS=8559, BW=33.4MiB/s (35.1MB/s)(334MiB/10001msec) 00:28:22.278 slat (nsec): min=6282, max=54018, avg=8511.78, stdev=3016.60 00:28:22.278 clat (usec): min=351, max=4152, avg=442.21, stdev=37.96 00:28:22.278 lat (usec): min=357, max=4186, avg=450.72, stdev=38.52 00:28:22.278 clat percentiles (usec): 00:28:22.278 | 1.00th=[ 379], 5.00th=[ 400], 10.00th=[ 412], 20.00th=[ 424], 00:28:22.278 | 30.00th=[ 429], 40.00th=[ 437], 50.00th=[ 441], 60.00th=[ 449], 00:28:22.278 | 70.00th=[ 453], 80.00th=[ 461], 90.00th=[ 474], 95.00th=[ 486], 00:28:22.278 | 99.00th=[ 515], 99.50th=[ 537], 99.90th=[ 611], 99.95th=[ 619], 00:28:22.278 | 99.99th=[ 660] 00:28:22.278 bw ( KiB/s): min=33085, max=35392, per=100.00%, avg=34248.74, stdev=560.78, samples=19 00:28:22.278 iops : min= 8271, max= 8848, avg=8562.16, stdev=140.21, samples=19 00:28:22.278 lat (usec) : 500=97.73%, 750=2.26% 00:28:22.278 lat (msec) : 2=0.01%, 10=0.01% 00:28:22.278 cpu : usr=84.17%, sys=14.06%, ctx=30, majf=0, minf=0 00:28:22.278 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:22.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:22.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:22.278 issued rwts: total=85600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:22.278 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:22.278 00:28:22.278 Run status group 0 (all jobs): 00:28:22.278 READ: 
bw=33.4MiB/s (35.1MB/s), 33.4MiB/s-33.4MiB/s (35.1MB/s-35.1MB/s), io=334MiB (351MB), run=10001-10001msec 00:28:22.278 12:25:14 -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:22.278 12:25:14 -- target/dif.sh@43 -- # local sub 00:28:22.278 12:25:14 -- target/dif.sh@45 -- # for sub in "$@" 00:28:22.278 12:25:14 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:22.278 12:25:14 -- target/dif.sh@36 -- # local sub_id=0 00:28:22.278 12:25:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:22.278 12:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.278 12:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:22.278 12:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.278 12:25:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:22.278 12:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.278 12:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:22.278 12:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.278 00:28:22.278 real 0m10.996s 00:28:22.278 user 0m9.079s 00:28:22.278 sys 0m1.658s 00:28:22.278 12:25:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:22.278 12:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:22.278 ************************************ 00:28:22.278 END TEST fio_dif_1_default 00:28:22.278 ************************************ 00:28:22.278 12:25:14 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:22.278 12:25:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:22.278 12:25:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:22.278 12:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:22.279 ************************************ 00:28:22.279 START TEST fio_dif_1_multi_subsystems 00:28:22.279 ************************************ 00:28:22.279 12:25:14 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:28:22.279 12:25:14 -- target/dif.sh@92 -- # local files=1 00:28:22.279 12:25:14 -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:22.279 12:25:14 -- target/dif.sh@28 -- # local sub 00:28:22.279 12:25:14 -- target/dif.sh@30 -- # for sub in "$@" 00:28:22.279 12:25:14 -- target/dif.sh@31 -- # create_subsystem 0 00:28:22.279 12:25:14 -- target/dif.sh@18 -- # local sub_id=0 00:28:22.279 12:25:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:22.279 12:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.279 12:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:22.279 bdev_null0 00:28:22.279 12:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.279 12:25:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:22.279 12:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.279 12:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:22.279 12:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.279 12:25:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:22.279 12:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.279 12:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:22.279 12:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.279 12:25:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:22.279 
12:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.279 12:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:22.279 [2024-04-26 12:25:14.263251] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.279 12:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.279 12:25:14 -- target/dif.sh@30 -- # for sub in "$@" 00:28:22.279 12:25:14 -- target/dif.sh@31 -- # create_subsystem 1 00:28:22.279 12:25:14 -- target/dif.sh@18 -- # local sub_id=1 00:28:22.279 12:25:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:22.279 12:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.279 12:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:22.279 bdev_null1 00:28:22.279 12:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.279 12:25:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:22.279 12:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.279 12:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:22.279 12:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.279 12:25:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:22.279 12:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.279 12:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:22.279 12:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.279 12:25:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:22.279 12:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.279 12:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:22.279 12:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.279 12:25:14 -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:22.279 12:25:14 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:22.279 12:25:14 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:22.279 12:25:14 -- nvmf/common.sh@521 -- # config=() 00:28:22.279 12:25:14 -- nvmf/common.sh@521 -- # local subsystem config 00:28:22.279 12:25:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:22.279 12:25:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:22.279 { 00:28:22.279 "params": { 00:28:22.279 "name": "Nvme$subsystem", 00:28:22.279 "trtype": "$TEST_TRANSPORT", 00:28:22.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.279 "adrfam": "ipv4", 00:28:22.279 "trsvcid": "$NVMF_PORT", 00:28:22.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.279 "hdgst": ${hdgst:-false}, 00:28:22.279 "ddgst": ${ddgst:-false} 00:28:22.279 }, 00:28:22.279 "method": "bdev_nvme_attach_controller" 00:28:22.279 } 00:28:22.279 EOF 00:28:22.279 )") 00:28:22.279 12:25:14 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:22.279 12:25:14 -- target/dif.sh@82 -- # gen_fio_conf 00:28:22.279 12:25:14 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:22.279 12:25:14 -- target/dif.sh@54 -- # local file 00:28:22.279 12:25:14 -- target/dif.sh@56 -- # cat 00:28:22.279 12:25:14 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:22.279 12:25:14 -- 
common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:22.279 12:25:14 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:22.279 12:25:14 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:22.279 12:25:14 -- common/autotest_common.sh@1327 -- # shift 00:28:22.279 12:25:14 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:22.279 12:25:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:22.279 12:25:14 -- nvmf/common.sh@543 -- # cat 00:28:22.279 12:25:14 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:22.279 12:25:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:22.279 12:25:14 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:22.279 12:25:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:22.279 12:25:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:22.279 { 00:28:22.279 "params": { 00:28:22.279 "name": "Nvme$subsystem", 00:28:22.279 "trtype": "$TEST_TRANSPORT", 00:28:22.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.279 "adrfam": "ipv4", 00:28:22.279 "trsvcid": "$NVMF_PORT", 00:28:22.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.279 "hdgst": ${hdgst:-false}, 00:28:22.279 "ddgst": ${ddgst:-false} 00:28:22.279 }, 00:28:22.279 "method": "bdev_nvme_attach_controller" 00:28:22.279 } 00:28:22.279 EOF 00:28:22.279 )") 00:28:22.279 12:25:14 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:22.279 12:25:14 -- target/dif.sh@72 -- # (( file <= files )) 00:28:22.279 12:25:14 -- target/dif.sh@73 -- # cat 00:28:22.279 12:25:14 -- nvmf/common.sh@543 -- # cat 00:28:22.279 12:25:14 -- target/dif.sh@72 -- # (( file++ )) 00:28:22.279 12:25:14 -- nvmf/common.sh@545 -- # jq . 
00:28:22.279 12:25:14 -- target/dif.sh@72 -- # (( file <= files )) 00:28:22.279 12:25:14 -- nvmf/common.sh@546 -- # IFS=, 00:28:22.279 12:25:14 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:22.279 "params": { 00:28:22.279 "name": "Nvme0", 00:28:22.279 "trtype": "tcp", 00:28:22.279 "traddr": "10.0.0.2", 00:28:22.279 "adrfam": "ipv4", 00:28:22.279 "trsvcid": "4420", 00:28:22.279 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:22.279 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:22.279 "hdgst": false, 00:28:22.279 "ddgst": false 00:28:22.279 }, 00:28:22.279 "method": "bdev_nvme_attach_controller" 00:28:22.279 },{ 00:28:22.279 "params": { 00:28:22.279 "name": "Nvme1", 00:28:22.279 "trtype": "tcp", 00:28:22.279 "traddr": "10.0.0.2", 00:28:22.279 "adrfam": "ipv4", 00:28:22.279 "trsvcid": "4420", 00:28:22.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:22.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:22.279 "hdgst": false, 00:28:22.279 "ddgst": false 00:28:22.279 }, 00:28:22.279 "method": "bdev_nvme_attach_controller" 00:28:22.279 }' 00:28:22.279 12:25:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:22.279 12:25:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:22.279 12:25:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:22.279 12:25:14 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:22.279 12:25:14 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:22.279 12:25:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:22.279 12:25:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:22.279 12:25:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:22.279 12:25:14 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:22.279 12:25:14 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:22.279 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:22.279 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:22.279 fio-3.35 00:28:22.279 Starting 2 threads 00:28:32.248 00:28:32.248 filename0: (groupid=0, jobs=1): err= 0: pid=79480: Fri Apr 26 12:25:25 2024 00:28:32.248 read: IOPS=4727, BW=18.5MiB/s (19.4MB/s)(185MiB/10001msec) 00:28:32.248 slat (nsec): min=6327, max=56479, avg=13304.11, stdev=3980.54 00:28:32.248 clat (usec): min=596, max=5686, avg=810.22, stdev=66.64 00:28:32.248 lat (usec): min=605, max=5720, avg=823.52, stdev=67.23 00:28:32.248 clat percentiles (usec): 00:28:32.248 | 1.00th=[ 693], 5.00th=[ 734], 10.00th=[ 750], 20.00th=[ 775], 00:28:32.248 | 30.00th=[ 791], 40.00th=[ 799], 50.00th=[ 816], 60.00th=[ 824], 00:28:32.248 | 70.00th=[ 832], 80.00th=[ 848], 90.00th=[ 865], 95.00th=[ 881], 00:28:32.248 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 971], 99.95th=[ 1090], 00:28:32.248 | 99.99th=[ 2900] 00:28:32.248 bw ( KiB/s): min=18528, max=19232, per=50.05%, avg=18927.16, stdev=197.66, samples=19 00:28:32.248 iops : min= 4632, max= 4808, avg=4731.79, stdev=49.41, samples=19 00:28:32.248 lat (usec) : 750=9.61%, 1000=90.32% 00:28:32.248 lat (msec) : 2=0.05%, 4=0.01%, 10=0.01% 00:28:32.248 cpu : usr=90.20%, sys=8.44%, ctx=8, majf=0, minf=9 00:28:32.248 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:32.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:28:32.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.248 issued rwts: total=47276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:32.248 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:32.248 filename1: (groupid=0, jobs=1): err= 0: pid=79481: Fri Apr 26 12:25:25 2024 00:28:32.248 read: IOPS=4727, BW=18.5MiB/s (19.4MB/s)(185MiB/10001msec) 00:28:32.248 slat (nsec): min=6640, max=72907, avg=13353.80, stdev=3929.21 00:28:32.248 clat (usec): min=458, max=4824, avg=809.67, stdev=55.73 00:28:32.248 lat (usec): min=465, max=4849, avg=823.02, stdev=55.95 00:28:32.248 clat percentiles (usec): 00:28:32.248 | 1.00th=[ 725], 5.00th=[ 750], 10.00th=[ 766], 20.00th=[ 783], 00:28:32.248 | 30.00th=[ 791], 40.00th=[ 799], 50.00th=[ 807], 60.00th=[ 816], 00:28:32.248 | 70.00th=[ 824], 80.00th=[ 840], 90.00th=[ 857], 95.00th=[ 873], 00:28:32.248 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 971], 99.95th=[ 1045], 00:28:32.248 | 99.99th=[ 2900] 00:28:32.248 bw ( KiB/s): min=18560, max=19232, per=50.05%, avg=18929.11, stdev=193.65, samples=19 00:28:32.248 iops : min= 4640, max= 4808, avg=4732.26, stdev=48.44, samples=19 00:28:32.248 lat (usec) : 500=0.01%, 750=4.94%, 1000=94.97% 00:28:32.248 lat (msec) : 2=0.06%, 4=0.01%, 10=0.01% 00:28:32.248 cpu : usr=90.24%, sys=8.46%, ctx=18, majf=0, minf=9 00:28:32.248 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:32.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.248 issued rwts: total=47280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:32.248 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:32.248 00:28:32.248 Run status group 0 (all jobs): 00:28:32.249 READ: bw=36.9MiB/s (38.7MB/s), 18.5MiB/s-18.5MiB/s (19.4MB/s-19.4MB/s), io=369MiB (387MB), run=10001-10001msec 00:28:32.249 12:25:25 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:32.249 12:25:25 -- target/dif.sh@43 -- # local sub 00:28:32.249 12:25:25 -- target/dif.sh@45 -- # for sub in "$@" 00:28:32.249 12:25:25 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:32.249 12:25:25 -- target/dif.sh@36 -- # local sub_id=0 00:28:32.249 12:25:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:32.249 12:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:32.249 12:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:32.249 12:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:32.249 12:25:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:32.249 12:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:32.249 12:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:32.249 12:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:32.249 12:25:25 -- target/dif.sh@45 -- # for sub in "$@" 00:28:32.249 12:25:25 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:32.249 12:25:25 -- target/dif.sh@36 -- # local sub_id=1 00:28:32.249 12:25:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:32.249 12:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:32.249 12:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:32.249 12:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:32.249 12:25:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:32.249 12:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:28:32.249 12:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:32.249 12:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:32.249 00:28:32.249 real 0m11.147s 00:28:32.249 user 0m18.840s 00:28:32.249 sys 0m1.994s 00:28:32.249 12:25:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:32.249 ************************************ 00:28:32.249 12:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:32.249 END TEST fio_dif_1_multi_subsystems 00:28:32.249 ************************************ 00:28:32.249 12:25:25 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:32.249 12:25:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:32.249 12:25:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:32.249 12:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:32.249 ************************************ 00:28:32.249 START TEST fio_dif_rand_params 00:28:32.249 ************************************ 00:28:32.249 12:25:25 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:28:32.249 12:25:25 -- target/dif.sh@100 -- # local NULL_DIF 00:28:32.249 12:25:25 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:32.249 12:25:25 -- target/dif.sh@103 -- # NULL_DIF=3 00:28:32.249 12:25:25 -- target/dif.sh@103 -- # bs=128k 00:28:32.249 12:25:25 -- target/dif.sh@103 -- # numjobs=3 00:28:32.249 12:25:25 -- target/dif.sh@103 -- # iodepth=3 00:28:32.249 12:25:25 -- target/dif.sh@103 -- # runtime=5 00:28:32.249 12:25:25 -- target/dif.sh@105 -- # create_subsystems 0 00:28:32.249 12:25:25 -- target/dif.sh@28 -- # local sub 00:28:32.249 12:25:25 -- target/dif.sh@30 -- # for sub in "$@" 00:28:32.249 12:25:25 -- target/dif.sh@31 -- # create_subsystem 0 00:28:32.249 12:25:25 -- target/dif.sh@18 -- # local sub_id=0 00:28:32.249 12:25:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:32.249 12:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:32.249 12:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:32.249 bdev_null0 00:28:32.249 12:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:32.249 12:25:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:32.249 12:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:32.249 12:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:32.249 12:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:32.249 12:25:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:32.249 12:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:32.249 12:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:32.249 12:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:32.249 12:25:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:32.249 12:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:32.249 12:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:32.249 [2024-04-26 12:25:25.524113] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.249 12:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:32.249 12:25:25 -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:32.249 12:25:25 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:32.249 12:25:25 -- target/dif.sh@51 -- # gen_nvmf_target_json 
0 00:28:32.249 12:25:25 -- nvmf/common.sh@521 -- # config=() 00:28:32.249 12:25:25 -- nvmf/common.sh@521 -- # local subsystem config 00:28:32.249 12:25:25 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:32.249 12:25:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:32.249 12:25:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:32.249 { 00:28:32.249 "params": { 00:28:32.249 "name": "Nvme$subsystem", 00:28:32.249 "trtype": "$TEST_TRANSPORT", 00:28:32.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.249 "adrfam": "ipv4", 00:28:32.249 "trsvcid": "$NVMF_PORT", 00:28:32.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.249 "hdgst": ${hdgst:-false}, 00:28:32.249 "ddgst": ${ddgst:-false} 00:28:32.249 }, 00:28:32.249 "method": "bdev_nvme_attach_controller" 00:28:32.249 } 00:28:32.249 EOF 00:28:32.249 )") 00:28:32.249 12:25:25 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:32.249 12:25:25 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:32.249 12:25:25 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:32.249 12:25:25 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:32.249 12:25:25 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:32.249 12:25:25 -- common/autotest_common.sh@1327 -- # shift 00:28:32.249 12:25:25 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:32.249 12:25:25 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:32.249 12:25:25 -- nvmf/common.sh@543 -- # cat 00:28:32.249 12:25:25 -- target/dif.sh@82 -- # gen_fio_conf 00:28:32.249 12:25:25 -- target/dif.sh@54 -- # local file 00:28:32.249 12:25:25 -- target/dif.sh@56 -- # cat 00:28:32.249 12:25:25 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:32.249 12:25:25 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:32.249 12:25:25 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:32.249 12:25:25 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:32.249 12:25:25 -- nvmf/common.sh@545 -- # jq . 
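For reference, the subsystem setup traced above reduces to four SPDK RPCs. A minimal standalone sketch with scripts/rpc.py (a hypothetical reproduction outside the test harness; it assumes the TCP transport was already created earlier in the run, e.g. with nvmf_create_transport -t tcp):

# null bdev with 16-byte metadata and DIF type 3, matching the rpc_cmd trace above
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# NVMe-oF subsystem, namespace backed by the null bdev, and a TCP listener on 10.0.0.2:4420
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420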
00:28:32.249 12:25:25 -- target/dif.sh@72 -- # (( file <= files )) 00:28:32.249 12:25:25 -- nvmf/common.sh@546 -- # IFS=, 00:28:32.249 12:25:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:32.249 "params": { 00:28:32.249 "name": "Nvme0", 00:28:32.249 "trtype": "tcp", 00:28:32.249 "traddr": "10.0.0.2", 00:28:32.249 "adrfam": "ipv4", 00:28:32.249 "trsvcid": "4420", 00:28:32.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:32.249 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:32.249 "hdgst": false, 00:28:32.249 "ddgst": false 00:28:32.249 }, 00:28:32.249 "method": "bdev_nvme_attach_controller" 00:28:32.249 }' 00:28:32.249 12:25:25 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:32.249 12:25:25 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:32.249 12:25:25 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:32.249 12:25:25 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:32.249 12:25:25 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:32.249 12:25:25 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:32.249 12:25:25 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:32.249 12:25:25 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:32.249 12:25:25 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:32.249 12:25:25 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:32.249 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:32.249 ... 00:28:32.249 fio-3.35 00:28:32.249 Starting 3 threads 00:28:38.808 00:28:38.808 filename0: (groupid=0, jobs=1): err= 0: pid=79642: Fri Apr 26 12:25:31 2024 00:28:38.808 read: IOPS=260, BW=32.6MiB/s (34.2MB/s)(163MiB/5006msec) 00:28:38.808 slat (nsec): min=7544, max=46019, avg=16237.03, stdev=6051.92 00:28:38.808 clat (usec): min=11320, max=14330, avg=11467.89, stdev=149.57 00:28:38.808 lat (usec): min=11333, max=14364, avg=11484.13, stdev=150.62 00:28:38.808 clat percentiles (usec): 00:28:38.808 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11469], 00:28:38.808 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:28:38.808 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11600], 95.00th=[11600], 00:28:38.808 | 99.00th=[11600], 99.50th=[11731], 99.90th=[14353], 99.95th=[14353], 00:28:38.808 | 99.99th=[14353] 00:28:38.808 bw ( KiB/s): min=33024, max=33792, per=33.33%, avg=33365.33, stdev=404.77, samples=9 00:28:38.808 iops : min= 258, max= 264, avg=260.67, stdev= 3.16, samples=9 00:28:38.808 lat (msec) : 20=100.00% 00:28:38.808 cpu : usr=91.65%, sys=7.83%, ctx=7, majf=0, minf=9 00:28:38.808 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:38.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:38.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:38.808 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:38.808 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:38.808 filename0: (groupid=0, jobs=1): err= 0: pid=79643: Fri Apr 26 12:25:31 2024 00:28:38.808 read: IOPS=260, BW=32.6MiB/s (34.2MB/s)(163MiB/5004msec) 00:28:38.808 slat (nsec): min=7716, max=43303, avg=17110.18, stdev=5600.55 00:28:38.808 clat (usec): min=11317, max=12651, avg=11463.01, stdev=79.05 00:28:38.808 lat (usec): 
min=11331, max=12676, avg=11480.12, stdev=80.08 00:28:38.808 clat percentiles (usec): 00:28:38.808 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11469], 00:28:38.808 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:28:38.808 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11469], 95.00th=[11600], 00:28:38.808 | 99.00th=[11600], 99.50th=[11731], 99.90th=[12649], 99.95th=[12649], 00:28:38.808 | 99.99th=[12649] 00:28:38.808 bw ( KiB/s): min=33024, max=33792, per=33.34%, avg=33372.67, stdev=398.36, samples=9 00:28:38.808 iops : min= 258, max= 264, avg=260.67, stdev= 3.16, samples=9 00:28:38.808 lat (msec) : 20=100.00% 00:28:38.808 cpu : usr=91.82%, sys=7.68%, ctx=6, majf=0, minf=9 00:28:38.808 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:38.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:38.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:38.808 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:38.808 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:38.808 filename0: (groupid=0, jobs=1): err= 0: pid=79644: Fri Apr 26 12:25:31 2024 00:28:38.808 read: IOPS=260, BW=32.6MiB/s (34.2MB/s)(163MiB/5003msec) 00:28:38.808 slat (nsec): min=7566, max=42237, avg=16920.99, stdev=5747.36 00:28:38.808 clat (usec): min=11336, max=12114, avg=11461.70, stdev=58.21 00:28:38.808 lat (usec): min=11351, max=12138, avg=11478.63, stdev=59.13 00:28:38.808 clat percentiles (usec): 00:28:38.808 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11469], 20.00th=[11469], 00:28:38.808 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:28:38.808 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11469], 95.00th=[11600], 00:28:38.808 | 99.00th=[11600], 99.50th=[11731], 99.90th=[12125], 99.95th=[12125], 00:28:38.808 | 99.99th=[12125] 00:28:38.808 bw ( KiB/s): min=33024, max=33792, per=33.33%, avg=33365.33, stdev=404.77, samples=9 00:28:38.808 iops : min= 258, max= 264, avg=260.67, stdev= 3.16, samples=9 00:28:38.808 lat (msec) : 20=100.00% 00:28:38.808 cpu : usr=91.36%, sys=8.12%, ctx=8, majf=0, minf=9 00:28:38.808 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:38.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:38.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:38.808 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:38.808 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:38.808 00:28:38.808 Run status group 0 (all jobs): 00:28:38.808 READ: bw=97.8MiB/s (103MB/s), 32.6MiB/s-32.6MiB/s (34.2MB/s-34.2MB/s), io=489MiB (513MB), run=5003-5006msec 00:28:38.808 12:25:31 -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:38.808 12:25:31 -- target/dif.sh@43 -- # local sub 00:28:38.808 12:25:31 -- target/dif.sh@45 -- # for sub in "$@" 00:28:38.808 12:25:31 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:38.808 12:25:31 -- target/dif.sh@36 -- # local sub_id=0 00:28:38.808 12:25:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:38.808 12:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.808 12:25:31 -- common/autotest_common.sh@10 -- # set +x 00:28:38.808 12:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.808 12:25:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:38.808 12:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 
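The destroy_subsystems trace above is the mirror image of the setup; as a sketch, the same cleanup could be issued by hand (a hypothetical standalone form of the rpc_cmd calls shown):

# removing the subsystem also drops its namespace and TCP listener
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_null_delete bdev_null0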
00:28:38.808 12:25:31 -- common/autotest_common.sh@10 -- # set +x 00:28:38.808 12:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.808 12:25:31 -- target/dif.sh@109 -- # NULL_DIF=2 00:28:38.808 12:25:31 -- target/dif.sh@109 -- # bs=4k 00:28:38.808 12:25:31 -- target/dif.sh@109 -- # numjobs=8 00:28:38.808 12:25:31 -- target/dif.sh@109 -- # iodepth=16 00:28:38.808 12:25:31 -- target/dif.sh@109 -- # runtime= 00:28:38.808 12:25:31 -- target/dif.sh@109 -- # files=2 00:28:38.808 12:25:31 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:38.808 12:25:31 -- target/dif.sh@28 -- # local sub 00:28:38.808 12:25:31 -- target/dif.sh@30 -- # for sub in "$@" 00:28:38.808 12:25:31 -- target/dif.sh@31 -- # create_subsystem 0 00:28:38.808 12:25:31 -- target/dif.sh@18 -- # local sub_id=0 00:28:38.808 12:25:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:38.808 12:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.808 12:25:31 -- common/autotest_common.sh@10 -- # set +x 00:28:38.808 bdev_null0 00:28:38.808 12:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.808 12:25:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:38.808 12:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.808 12:25:31 -- common/autotest_common.sh@10 -- # set +x 00:28:38.808 12:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.808 12:25:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:38.808 12:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.808 12:25:31 -- common/autotest_common.sh@10 -- # set +x 00:28:38.808 12:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.808 12:25:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:38.808 12:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.808 12:25:31 -- common/autotest_common.sh@10 -- # set +x 00:28:38.808 [2024-04-26 12:25:31.522770] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.808 12:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.808 12:25:31 -- target/dif.sh@30 -- # for sub in "$@" 00:28:38.808 12:25:31 -- target/dif.sh@31 -- # create_subsystem 1 00:28:38.808 12:25:31 -- target/dif.sh@18 -- # local sub_id=1 00:28:38.808 12:25:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:38.808 12:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.808 12:25:31 -- common/autotest_common.sh@10 -- # set +x 00:28:38.808 bdev_null1 00:28:38.808 12:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.809 12:25:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:38.809 12:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.809 12:25:31 -- common/autotest_common.sh@10 -- # set +x 00:28:38.809 12:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.809 12:25:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:38.809 12:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.809 12:25:31 -- common/autotest_common.sh@10 -- # set +x 00:28:38.809 12:25:31 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:28:38.809 12:25:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:38.809 12:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.809 12:25:31 -- common/autotest_common.sh@10 -- # set +x 00:28:38.809 12:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.809 12:25:31 -- target/dif.sh@30 -- # for sub in "$@" 00:28:38.809 12:25:31 -- target/dif.sh@31 -- # create_subsystem 2 00:28:38.809 12:25:31 -- target/dif.sh@18 -- # local sub_id=2 00:28:38.809 12:25:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:38.809 12:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.809 12:25:31 -- common/autotest_common.sh@10 -- # set +x 00:28:38.809 bdev_null2 00:28:38.809 12:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.809 12:25:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:38.809 12:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.809 12:25:31 -- common/autotest_common.sh@10 -- # set +x 00:28:38.809 12:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.809 12:25:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:38.809 12:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.809 12:25:31 -- common/autotest_common.sh@10 -- # set +x 00:28:38.809 12:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.809 12:25:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:38.809 12:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.809 12:25:31 -- common/autotest_common.sh@10 -- # set +x 00:28:38.809 12:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.809 12:25:31 -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:38.809 12:25:31 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:38.809 12:25:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:38.809 12:25:31 -- nvmf/common.sh@521 -- # config=() 00:28:38.809 12:25:31 -- nvmf/common.sh@521 -- # local subsystem config 00:28:38.809 12:25:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:38.809 12:25:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:38.809 12:25:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:38.809 { 00:28:38.809 "params": { 00:28:38.809 "name": "Nvme$subsystem", 00:28:38.809 "trtype": "$TEST_TRANSPORT", 00:28:38.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.809 "adrfam": "ipv4", 00:28:38.809 "trsvcid": "$NVMF_PORT", 00:28:38.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.809 "hdgst": ${hdgst:-false}, 00:28:38.809 "ddgst": ${ddgst:-false} 00:28:38.809 }, 00:28:38.809 "method": "bdev_nvme_attach_controller" 00:28:38.809 } 00:28:38.809 EOF 00:28:38.809 )") 00:28:38.809 12:25:31 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:38.809 12:25:31 -- target/dif.sh@82 -- # gen_fio_conf 00:28:38.809 12:25:31 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:38.809 12:25:31 -- target/dif.sh@54 -- # local file 00:28:38.809 12:25:31 -- target/dif.sh@56 -- # cat 
00:28:38.809 12:25:31 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:38.809 12:25:31 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:38.809 12:25:31 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:38.809 12:25:31 -- common/autotest_common.sh@1327 -- # shift 00:28:38.809 12:25:31 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:38.809 12:25:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:38.809 12:25:31 -- nvmf/common.sh@543 -- # cat 00:28:38.809 12:25:31 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:38.809 12:25:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:38.809 12:25:31 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:38.809 12:25:31 -- target/dif.sh@72 -- # (( file <= files )) 00:28:38.809 12:25:31 -- target/dif.sh@73 -- # cat 00:28:38.809 12:25:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:38.809 12:25:31 -- target/dif.sh@72 -- # (( file++ )) 00:28:38.809 12:25:31 -- target/dif.sh@72 -- # (( file <= files )) 00:28:38.809 12:25:31 -- target/dif.sh@73 -- # cat 00:28:38.809 12:25:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:38.809 12:25:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:38.809 { 00:28:38.809 "params": { 00:28:38.809 "name": "Nvme$subsystem", 00:28:38.809 "trtype": "$TEST_TRANSPORT", 00:28:38.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.809 "adrfam": "ipv4", 00:28:38.809 "trsvcid": "$NVMF_PORT", 00:28:38.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.809 "hdgst": ${hdgst:-false}, 00:28:38.809 "ddgst": ${ddgst:-false} 00:28:38.809 }, 00:28:38.809 "method": "bdev_nvme_attach_controller" 00:28:38.809 } 00:28:38.809 EOF 00:28:38.809 )") 00:28:38.809 12:25:31 -- nvmf/common.sh@543 -- # cat 00:28:38.809 12:25:31 -- target/dif.sh@72 -- # (( file++ )) 00:28:38.809 12:25:31 -- target/dif.sh@72 -- # (( file <= files )) 00:28:38.809 12:25:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:38.809 12:25:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:38.809 { 00:28:38.809 "params": { 00:28:38.809 "name": "Nvme$subsystem", 00:28:38.809 "trtype": "$TEST_TRANSPORT", 00:28:38.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.809 "adrfam": "ipv4", 00:28:38.809 "trsvcid": "$NVMF_PORT", 00:28:38.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.809 "hdgst": ${hdgst:-false}, 00:28:38.809 "ddgst": ${ddgst:-false} 00:28:38.809 }, 00:28:38.809 "method": "bdev_nvme_attach_controller" 00:28:38.809 } 00:28:38.809 EOF 00:28:38.809 )") 00:28:38.809 12:25:31 -- nvmf/common.sh@543 -- # cat 00:28:38.809 12:25:31 -- nvmf/common.sh@545 -- # jq . 
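The fio_bdev wrapper traced here amounts to LD_PRELOAD-ing the SPDK fio bdev plugin and handing it the generated bdev_nvme_attach_controller entries via --spdk_json_conf. A minimal standalone sketch (file names are illustrative; the helper in the trace folds the per-subsystem entries into a full "subsystems"/"bdev" config and passes it over /dev/fd/62 rather than a file):

cat > /tmp/bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# preload the plugin built at build/fio/spdk_bdev and point fio at the JSON config
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/job.fio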
00:28:38.809 12:25:31 -- nvmf/common.sh@546 -- # IFS=, 00:28:38.809 12:25:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:38.809 "params": { 00:28:38.809 "name": "Nvme0", 00:28:38.809 "trtype": "tcp", 00:28:38.809 "traddr": "10.0.0.2", 00:28:38.809 "adrfam": "ipv4", 00:28:38.809 "trsvcid": "4420", 00:28:38.809 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:38.809 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:38.809 "hdgst": false, 00:28:38.809 "ddgst": false 00:28:38.809 }, 00:28:38.809 "method": "bdev_nvme_attach_controller" 00:28:38.809 },{ 00:28:38.809 "params": { 00:28:38.809 "name": "Nvme1", 00:28:38.809 "trtype": "tcp", 00:28:38.809 "traddr": "10.0.0.2", 00:28:38.809 "adrfam": "ipv4", 00:28:38.809 "trsvcid": "4420", 00:28:38.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:38.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:38.809 "hdgst": false, 00:28:38.809 "ddgst": false 00:28:38.809 }, 00:28:38.809 "method": "bdev_nvme_attach_controller" 00:28:38.809 },{ 00:28:38.809 "params": { 00:28:38.809 "name": "Nvme2", 00:28:38.809 "trtype": "tcp", 00:28:38.809 "traddr": "10.0.0.2", 00:28:38.809 "adrfam": "ipv4", 00:28:38.809 "trsvcid": "4420", 00:28:38.809 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:38.809 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:38.809 "hdgst": false, 00:28:38.809 "ddgst": false 00:28:38.809 }, 00:28:38.809 "method": "bdev_nvme_attach_controller" 00:28:38.809 }' 00:28:38.809 12:25:31 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:38.809 12:25:31 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:38.809 12:25:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:38.809 12:25:31 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:38.810 12:25:31 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:38.810 12:25:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:38.810 12:25:31 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:38.810 12:25:31 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:38.810 12:25:31 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:38.810 12:25:31 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:38.810 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:38.810 ... 00:28:38.810 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:38.810 ... 00:28:38.810 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:38.810 ... 
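The filename0/1/2 job definitions echoed above come from gen_fio_conf on the other side of /dev/fd/61. The exact file is not printed in the log, but a sketch consistent with the echoed parameters (randread, bs=4k, iodepth=16, 8 jobs per file; the Nvme*n1 filenames are the bdev names the attach calls above would expose and are an assumption here) looks roughly like:

cat > /tmp/job.fio <<'FIO'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
iodepth=16
numjobs=8

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
FIO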
00:28:38.810 fio-3.35 00:28:38.810 Starting 24 threads 00:28:51.011 00:28:51.011 filename0: (groupid=0, jobs=1): err= 0: pid=79739: Fri Apr 26 12:25:42 2024 00:28:51.011 read: IOPS=233, BW=932KiB/s (955kB/s)(9360KiB/10040msec) 00:28:51.011 slat (usec): min=3, max=8033, avg=33.86, stdev=348.18 00:28:51.011 clat (msec): min=25, max=122, avg=68.44, stdev=17.38 00:28:51.011 lat (msec): min=25, max=122, avg=68.48, stdev=17.38 00:28:51.011 clat percentiles (msec): 00:28:51.011 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:28:51.011 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 73], 00:28:51.011 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 87], 95.00th=[ 103], 00:28:51.011 | 99.00th=[ 117], 99.50th=[ 118], 99.90th=[ 123], 99.95th=[ 123], 00:28:51.011 | 99.99th=[ 123] 00:28:51.011 bw ( KiB/s): min= 824, max= 1024, per=4.25%, avg=929.45, stdev=51.10, samples=20 00:28:51.011 iops : min= 206, max= 256, avg=232.35, stdev=12.78, samples=20 00:28:51.011 lat (msec) : 50=20.81%, 100=73.72%, 250=5.47% 00:28:51.011 cpu : usr=36.37%, sys=2.06%, ctx=1248, majf=0, minf=9 00:28:51.011 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:28:51.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.011 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.011 issued rwts: total=2340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.011 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.011 filename0: (groupid=0, jobs=1): err= 0: pid=79740: Fri Apr 26 12:25:42 2024 00:28:51.011 read: IOPS=228, BW=912KiB/s (934kB/s)(9160KiB/10042msec) 00:28:51.011 slat (usec): min=6, max=8026, avg=31.91, stdev=373.96 00:28:51.011 clat (msec): min=26, max=131, avg=70.01, stdev=17.03 00:28:51.011 lat (msec): min=26, max=131, avg=70.04, stdev=17.04 00:28:51.011 clat percentiles (msec): 00:28:51.011 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 51], 00:28:51.011 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:28:51.011 | 70.00th=[ 80], 80.00th=[ 83], 90.00th=[ 88], 95.00th=[ 100], 00:28:51.011 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 123], 00:28:51.011 | 99.99th=[ 132] 00:28:51.011 bw ( KiB/s): min= 792, max= 992, per=4.15%, avg=909.25, stdev=55.49, samples=20 00:28:51.011 iops : min= 198, max= 248, avg=227.30, stdev=13.89, samples=20 00:28:51.011 lat (msec) : 50=19.08%, 100=76.16%, 250=4.76% 00:28:51.011 cpu : usr=31.38%, sys=1.84%, ctx=897, majf=0, minf=9 00:28:51.011 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.5%, 16=16.4%, 32=0.0%, >=64=0.0% 00:28:51.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.011 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.011 issued rwts: total=2290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.011 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.011 filename0: (groupid=0, jobs=1): err= 0: pid=79741: Fri Apr 26 12:25:42 2024 00:28:51.011 read: IOPS=228, BW=912KiB/s (934kB/s)(9160KiB/10042msec) 00:28:51.011 slat (usec): min=4, max=8025, avg=24.77, stdev=264.63 00:28:51.011 clat (msec): min=34, max=131, avg=70.00, stdev=16.79 00:28:51.011 lat (msec): min=34, max=131, avg=70.03, stdev=16.79 00:28:51.011 clat percentiles (msec): 00:28:51.011 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 53], 00:28:51.011 | 30.00th=[ 60], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:28:51.011 | 70.00th=[ 79], 80.00th=[ 83], 90.00th=[ 88], 95.00th=[ 104], 00:28:51.011 
| 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:28:51.011 | 99.99th=[ 132] 00:28:51.011 bw ( KiB/s): min= 824, max= 1024, per=4.15%, avg=909.25, stdev=59.53, samples=20 00:28:51.011 iops : min= 206, max= 256, avg=227.30, stdev=14.90, samples=20 00:28:51.011 lat (msec) : 50=16.16%, 100=78.38%, 250=5.46% 00:28:51.011 cpu : usr=39.79%, sys=2.45%, ctx=1003, majf=0, minf=9 00:28:51.011 IO depths : 1=0.1%, 2=0.3%, 4=1.4%, 8=82.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:28:51.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.011 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.011 issued rwts: total=2290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.011 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.011 filename0: (groupid=0, jobs=1): err= 0: pid=79742: Fri Apr 26 12:25:42 2024 00:28:51.011 read: IOPS=229, BW=917KiB/s (939kB/s)(9192KiB/10020msec) 00:28:51.011 slat (usec): min=4, max=8026, avg=25.15, stdev=289.31 00:28:51.011 clat (msec): min=29, max=143, avg=69.61, stdev=17.93 00:28:51.011 lat (msec): min=29, max=143, avg=69.63, stdev=17.93 00:28:51.011 clat percentiles (msec): 00:28:51.011 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 50], 00:28:51.011 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:28:51.011 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 108], 00:28:51.011 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 144], 00:28:51.011 | 99.99th=[ 144] 00:28:51.011 bw ( KiB/s): min= 763, max= 1024, per=4.18%, avg=914.95, stdev=70.08, samples=20 00:28:51.011 iops : min= 190, max= 256, avg=228.70, stdev=17.61, samples=20 00:28:51.011 lat (msec) : 50=22.32%, 100=71.98%, 250=5.70% 00:28:51.011 cpu : usr=31.36%, sys=1.60%, ctx=859, majf=0, minf=9 00:28:51.011 IO depths : 1=0.1%, 2=0.6%, 4=2.7%, 8=81.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:28:51.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.011 complete : 0=0.0%, 4=87.7%, 8=11.7%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.011 issued rwts: total=2298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.011 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.011 filename0: (groupid=0, jobs=1): err= 0: pid=79743: Fri Apr 26 12:25:42 2024 00:28:51.011 read: IOPS=229, BW=920KiB/s (942kB/s)(9244KiB/10049msec) 00:28:51.011 slat (usec): min=3, max=8040, avg=26.12, stdev=272.20 00:28:51.011 clat (msec): min=9, max=139, avg=69.37, stdev=18.10 00:28:51.011 lat (msec): min=9, max=139, avg=69.40, stdev=18.11 00:28:51.011 clat percentiles (msec): 00:28:51.011 | 1.00th=[ 21], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 53], 00:28:51.011 | 30.00th=[ 58], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 75], 00:28:51.011 | 70.00th=[ 78], 80.00th=[ 81], 90.00th=[ 91], 95.00th=[ 104], 00:28:51.011 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 136], 00:28:51.011 | 99.99th=[ 140] 00:28:51.011 bw ( KiB/s): min= 816, max= 1168, per=4.19%, avg=918.00, stdev=80.92, samples=20 00:28:51.011 iops : min= 204, max= 292, avg=229.50, stdev=20.23, samples=20 00:28:51.011 lat (msec) : 10=0.69%, 50=13.24%, 100=79.97%, 250=6.10% 00:28:51.011 cpu : usr=40.40%, sys=2.65%, ctx=1293, majf=0, minf=9 00:28:51.011 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.1%, 16=16.4%, 32=0.0%, >=64=0.0% 00:28:51.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.011 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.011 issued rwts: 
total=2311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.011 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.011 filename0: (groupid=0, jobs=1): err= 0: pid=79744: Fri Apr 26 12:25:42 2024 00:28:51.011 read: IOPS=228, BW=913KiB/s (935kB/s)(9156KiB/10023msec) 00:28:51.011 slat (usec): min=3, max=4047, avg=20.69, stdev=145.47 00:28:51.011 clat (msec): min=27, max=138, avg=69.88, stdev=17.62 00:28:51.011 lat (msec): min=27, max=138, avg=69.90, stdev=17.63 00:28:51.011 clat percentiles (msec): 00:28:51.011 | 1.00th=[ 42], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 52], 00:28:51.011 | 30.00th=[ 57], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 74], 00:28:51.011 | 70.00th=[ 79], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 104], 00:28:51.011 | 99.00th=[ 122], 99.50th=[ 122], 99.90th=[ 123], 99.95th=[ 138], 00:28:51.011 | 99.99th=[ 138] 00:28:51.011 bw ( KiB/s): min= 764, max= 1024, per=4.16%, avg=911.85, stdev=89.53, samples=20 00:28:51.011 iops : min= 191, max= 256, avg=227.95, stdev=22.40, samples=20 00:28:51.011 lat (msec) : 50=16.64%, 100=76.80%, 250=6.55% 00:28:51.011 cpu : usr=42.98%, sys=2.42%, ctx=1146, majf=0, minf=9 00:28:51.011 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=77.9%, 16=14.9%, 32=0.0%, >=64=0.0% 00:28:51.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.011 complete : 0=0.0%, 4=88.3%, 8=10.4%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.011 issued rwts: total=2289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.011 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.011 filename0: (groupid=0, jobs=1): err= 0: pid=79745: Fri Apr 26 12:25:42 2024 00:28:51.011 read: IOPS=232, BW=928KiB/s (951kB/s)(9332KiB/10053msec) 00:28:51.011 slat (usec): min=3, max=7032, avg=17.78, stdev=167.43 00:28:51.011 clat (usec): min=1466, max=140032, avg=68760.80, stdev=26384.78 00:28:51.011 lat (usec): min=1474, max=140049, avg=68778.57, stdev=26385.31 00:28:51.011 clat percentiles (usec): 00:28:51.011 | 1.00th=[ 1516], 5.00th=[ 1631], 10.00th=[ 45876], 20.00th=[ 53740], 00:28:51.011 | 30.00th=[ 64226], 40.00th=[ 71828], 50.00th=[ 72877], 60.00th=[ 77071], 00:28:51.011 | 70.00th=[ 80217], 80.00th=[ 83362], 90.00th=[ 95945], 95.00th=[105382], 00:28:51.011 | 99.00th=[122160], 99.50th=[131597], 99.90th=[135267], 99.95th=[139461], 00:28:51.011 | 99.99th=[139461] 00:28:51.011 bw ( KiB/s): min= 760, max= 2412, per=4.23%, avg=925.80, stdev=355.97, samples=20 00:28:51.011 iops : min= 190, max= 603, avg=231.45, stdev=88.99, samples=20 00:28:51.011 lat (msec) : 2=6.17%, 4=1.37%, 10=0.77%, 20=0.60%, 50=7.33% 00:28:51.011 lat (msec) : 100=75.91%, 250=7.84% 00:28:51.011 cpu : usr=41.13%, sys=2.45%, ctx=1360, majf=0, minf=0 00:28:51.011 IO depths : 1=0.4%, 2=2.2%, 4=7.6%, 8=74.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:28:51.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.011 complete : 0=0.0%, 4=89.7%, 8=8.6%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.011 issued rwts: total=2333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.011 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.011 filename0: (groupid=0, jobs=1): err= 0: pid=79746: Fri Apr 26 12:25:42 2024 00:28:51.011 read: IOPS=231, BW=928KiB/s (950kB/s)(9288KiB/10014msec) 00:28:51.012 slat (usec): min=4, max=8023, avg=21.10, stdev=209.03 00:28:51.012 clat (msec): min=21, max=135, avg=68.90, stdev=18.23 00:28:51.012 lat (msec): min=21, max=135, avg=68.92, stdev=18.23 00:28:51.012 clat percentiles (msec): 00:28:51.012 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 
20.00th=[ 50], 00:28:51.012 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:28:51.012 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 91], 95.00th=[ 108], 00:28:51.012 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 129], 99.95th=[ 136], 00:28:51.012 | 99.99th=[ 136] 00:28:51.012 bw ( KiB/s): min= 768, max= 1072, per=4.21%, avg=922.20, stdev=84.88, samples=20 00:28:51.012 iops : min= 192, max= 268, avg=230.55, stdev=21.22, samples=20 00:28:51.012 lat (msec) : 50=21.10%, 100=72.31%, 250=6.59% 00:28:51.012 cpu : usr=37.45%, sys=1.90%, ctx=1186, majf=0, minf=9 00:28:51.012 IO depths : 1=0.1%, 2=1.2%, 4=5.0%, 8=78.8%, 16=15.0%, 32=0.0%, >=64=0.0% 00:28:51.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.012 complete : 0=0.0%, 4=88.1%, 8=10.9%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.012 issued rwts: total=2322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.012 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.012 filename1: (groupid=0, jobs=1): err= 0: pid=79747: Fri Apr 26 12:25:42 2024 00:28:51.012 read: IOPS=230, BW=922KiB/s (944kB/s)(9244KiB/10029msec) 00:28:51.012 slat (usec): min=3, max=10027, avg=26.52, stdev=298.35 00:28:51.012 clat (msec): min=34, max=121, avg=69.26, stdev=16.84 00:28:51.012 lat (msec): min=34, max=121, avg=69.29, stdev=16.85 00:28:51.012 clat percentiles (msec): 00:28:51.012 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 51], 00:28:51.012 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 74], 00:28:51.012 | 70.00th=[ 79], 80.00th=[ 82], 90.00th=[ 88], 95.00th=[ 100], 00:28:51.012 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 122], 00:28:51.012 | 99.99th=[ 122] 00:28:51.012 bw ( KiB/s): min= 766, max= 1024, per=4.21%, avg=920.35, stdev=66.12, samples=20 00:28:51.012 iops : min= 191, max= 256, avg=230.05, stdev=16.60, samples=20 00:28:51.012 lat (msec) : 50=18.04%, 100=76.98%, 250=4.98% 00:28:51.012 cpu : usr=37.58%, sys=2.26%, ctx=1140, majf=0, minf=9 00:28:51.012 IO depths : 1=0.1%, 2=0.6%, 4=2.6%, 8=81.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:28:51.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.012 complete : 0=0.0%, 4=87.7%, 8=11.7%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.012 issued rwts: total=2311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.012 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.012 filename1: (groupid=0, jobs=1): err= 0: pid=79748: Fri Apr 26 12:25:42 2024 00:28:51.012 read: IOPS=224, BW=898KiB/s (919kB/s)(9024KiB/10052msec) 00:28:51.012 slat (usec): min=4, max=8027, avg=26.69, stdev=279.65 00:28:51.012 clat (msec): min=5, max=123, avg=71.12, stdev=17.98 00:28:51.012 lat (msec): min=5, max=123, avg=71.14, stdev=17.97 00:28:51.012 clat percentiles (msec): 00:28:51.012 | 1.00th=[ 7], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 57], 00:28:51.012 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 74], 00:28:51.012 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 102], 00:28:51.012 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 125], 99.95th=[ 125], 00:28:51.012 | 99.99th=[ 125] 00:28:51.012 bw ( KiB/s): min= 784, max= 1126, per=4.09%, avg=895.50, stdev=90.00, samples=20 00:28:51.012 iops : min= 196, max= 281, avg=223.85, stdev=22.43, samples=20 00:28:51.012 lat (msec) : 10=1.33%, 20=0.09%, 50=11.79%, 100=81.25%, 250=5.54% 00:28:51.012 cpu : usr=36.71%, sys=2.04%, ctx=1026, majf=0, minf=9 00:28:51.012 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=80.8%, 16=16.6%, 32=0.0%, >=64=0.0% 00:28:51.012 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.012 complete : 0=0.0%, 4=88.2%, 8=11.4%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.012 issued rwts: total=2256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.012 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.012 filename1: (groupid=0, jobs=1): err= 0: pid=79749: Fri Apr 26 12:25:42 2024 00:28:51.012 read: IOPS=220, BW=881KiB/s (902kB/s)(8844KiB/10042msec) 00:28:51.012 slat (usec): min=4, max=8049, avg=31.51, stdev=380.90 00:28:51.012 clat (msec): min=34, max=132, avg=72.47, stdev=16.98 00:28:51.012 lat (msec): min=35, max=132, avg=72.51, stdev=16.98 00:28:51.012 clat percentiles (msec): 00:28:51.012 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 58], 00:28:51.012 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 74], 00:28:51.012 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:28:51.012 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 129], 99.95th=[ 129], 00:28:51.012 | 99.99th=[ 132] 00:28:51.012 bw ( KiB/s): min= 656, max= 1016, per=4.01%, avg=877.65, stdev=75.17, samples=20 00:28:51.012 iops : min= 164, max= 254, avg=219.40, stdev=18.80, samples=20 00:28:51.012 lat (msec) : 50=14.11%, 100=79.33%, 250=6.56% 00:28:51.012 cpu : usr=31.02%, sys=1.90%, ctx=860, majf=0, minf=9 00:28:51.012 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=80.4%, 16=16.4%, 32=0.0%, >=64=0.0% 00:28:51.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.012 complete : 0=0.0%, 4=88.2%, 8=11.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.012 issued rwts: total=2211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.012 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.012 filename1: (groupid=0, jobs=1): err= 0: pid=79750: Fri Apr 26 12:25:42 2024 00:28:51.012 read: IOPS=223, BW=892KiB/s (914kB/s)(8976KiB/10059msec) 00:28:51.012 slat (usec): min=3, max=8055, avg=28.14, stdev=329.23 00:28:51.012 clat (msec): min=4, max=143, avg=71.46, stdev=18.99 00:28:51.012 lat (msec): min=4, max=143, avg=71.49, stdev=18.99 00:28:51.012 clat percentiles (msec): 00:28:51.012 | 1.00th=[ 8], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 57], 00:28:51.012 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:28:51.012 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 104], 00:28:51.012 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 125], 99.95th=[ 125], 00:28:51.012 | 99.99th=[ 144] 00:28:51.012 bw ( KiB/s): min= 656, max= 1383, per=4.07%, avg=890.75, stdev=142.06, samples=20 00:28:51.012 iops : min= 164, max= 345, avg=222.65, stdev=35.38, samples=20 00:28:51.012 lat (msec) : 10=1.43%, 20=1.34%, 50=11.23%, 100=80.21%, 250=5.79% 00:28:51.012 cpu : usr=37.86%, sys=2.33%, ctx=1421, majf=0, minf=0 00:28:51.012 IO depths : 1=0.1%, 2=0.9%, 4=3.3%, 8=79.2%, 16=16.4%, 32=0.0%, >=64=0.0% 00:28:51.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.012 complete : 0=0.0%, 4=88.6%, 8=10.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.012 issued rwts: total=2244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.012 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.012 filename1: (groupid=0, jobs=1): err= 0: pid=79751: Fri Apr 26 12:25:42 2024 00:28:51.012 read: IOPS=212, BW=849KiB/s (869kB/s)(8504KiB/10017msec) 00:28:51.012 slat (usec): min=4, max=8026, avg=17.37, stdev=173.88 00:28:51.012 clat (msec): min=21, max=140, avg=75.27, stdev=18.82 00:28:51.012 lat (msec): min=21, max=140, avg=75.29, stdev=18.82 00:28:51.012 clat percentiles 
(msec): 00:28:51.012 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 60], 00:28:51.012 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 79], 00:28:51.012 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 99], 95.00th=[ 109], 00:28:51.012 | 99.00th=[ 140], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 140], 00:28:51.012 | 99.99th=[ 140] 00:28:51.012 bw ( KiB/s): min= 656, max= 1000, per=3.85%, avg=844.00, stdev=107.00, samples=20 00:28:51.012 iops : min= 164, max= 250, avg=211.00, stdev=26.75, samples=20 00:28:51.012 lat (msec) : 50=12.37%, 100=78.08%, 250=9.55% 00:28:51.012 cpu : usr=36.09%, sys=2.02%, ctx=1306, majf=0, minf=9 00:28:51.012 IO depths : 1=0.1%, 2=3.1%, 4=12.4%, 8=70.3%, 16=14.2%, 32=0.0%, >=64=0.0% 00:28:51.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.012 complete : 0=0.0%, 4=90.4%, 8=6.8%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.012 issued rwts: total=2126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.012 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.012 filename1: (groupid=0, jobs=1): err= 0: pid=79752: Fri Apr 26 12:25:42 2024 00:28:51.012 read: IOPS=228, BW=916KiB/s (938kB/s)(9192KiB/10039msec) 00:28:51.012 slat (usec): min=4, max=4049, avg=18.15, stdev=118.82 00:28:51.012 clat (msec): min=34, max=135, avg=69.75, stdev=17.11 00:28:51.012 lat (msec): min=34, max=135, avg=69.77, stdev=17.11 00:28:51.012 clat percentiles (msec): 00:28:51.012 | 1.00th=[ 41], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 52], 00:28:51.012 | 30.00th=[ 58], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:28:51.012 | 70.00th=[ 79], 80.00th=[ 82], 90.00th=[ 90], 95.00th=[ 102], 00:28:51.012 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 127], 00:28:51.012 | 99.99th=[ 136] 00:28:51.012 bw ( KiB/s): min= 784, max= 1000, per=4.17%, avg=912.60, stdev=64.45, samples=20 00:28:51.012 iops : min= 196, max= 250, avg=228.15, stdev=16.11, samples=20 00:28:51.012 lat (msec) : 50=17.32%, 100=77.20%, 250=5.48% 00:28:51.012 cpu : usr=41.28%, sys=2.48%, ctx=1549, majf=0, minf=9 00:28:51.012 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.9%, 16=16.0%, 32=0.0%, >=64=0.0% 00:28:51.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.012 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.012 issued rwts: total=2298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.012 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.012 filename1: (groupid=0, jobs=1): err= 0: pid=79753: Fri Apr 26 12:25:42 2024 00:28:51.012 read: IOPS=232, BW=928KiB/s (951kB/s)(9316KiB/10036msec) 00:28:51.012 slat (usec): min=3, max=8023, avg=21.67, stdev=203.37 00:28:51.012 clat (msec): min=34, max=127, avg=68.78, stdev=17.46 00:28:51.012 lat (msec): min=34, max=127, avg=68.80, stdev=17.46 00:28:51.012 clat percentiles (msec): 00:28:51.012 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:28:51.012 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 73], 00:28:51.012 | 70.00th=[ 79], 80.00th=[ 82], 90.00th=[ 91], 95.00th=[ 100], 00:28:51.012 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 122], 00:28:51.012 | 99.99th=[ 128] 00:28:51.012 bw ( KiB/s): min= 768, max= 1024, per=4.23%, avg=925.25, stdev=78.72, samples=20 00:28:51.012 iops : min= 192, max= 256, avg=231.30, stdev=19.71, samples=20 00:28:51.012 lat (msec) : 50=21.00%, 100=74.32%, 250=4.68% 00:28:51.012 cpu : usr=37.24%, sys=2.09%, ctx=1090, majf=0, minf=9 00:28:51.012 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=81.5%, 
16=15.5%, 32=0.0%, >=64=0.0% 00:28:51.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.012 complete : 0=0.0%, 4=87.4%, 8=12.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.013 issued rwts: total=2329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.013 filename1: (groupid=0, jobs=1): err= 0: pid=79754: Fri Apr 26 12:25:42 2024 00:28:51.013 read: IOPS=234, BW=938KiB/s (960kB/s)(9412KiB/10038msec) 00:28:51.013 slat (usec): min=4, max=8021, avg=23.42, stdev=218.59 00:28:51.013 clat (msec): min=24, max=137, avg=68.07, stdev=17.39 00:28:51.013 lat (msec): min=24, max=137, avg=68.09, stdev=17.39 00:28:51.013 clat percentiles (msec): 00:28:51.013 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 51], 00:28:51.013 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 70], 60.00th=[ 73], 00:28:51.013 | 70.00th=[ 78], 80.00th=[ 81], 90.00th=[ 88], 95.00th=[ 103], 00:28:51.013 | 99.00th=[ 118], 99.50th=[ 123], 99.90th=[ 125], 99.95th=[ 125], 00:28:51.013 | 99.99th=[ 138] 00:28:51.013 bw ( KiB/s): min= 744, max= 1024, per=4.27%, avg=934.80, stdev=78.52, samples=20 00:28:51.013 iops : min= 186, max= 256, avg=233.70, stdev=19.63, samples=20 00:28:51.013 lat (msec) : 50=18.44%, 100=75.95%, 250=5.61% 00:28:51.013 cpu : usr=42.68%, sys=2.64%, ctx=1307, majf=0, minf=9 00:28:51.013 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.9%, 16=15.6%, 32=0.0%, >=64=0.0% 00:28:51.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.013 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.013 issued rwts: total=2353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.013 filename2: (groupid=0, jobs=1): err= 0: pid=79755: Fri Apr 26 12:25:42 2024 00:28:51.013 read: IOPS=229, BW=920KiB/s (942kB/s)(9224KiB/10027msec) 00:28:51.013 slat (usec): min=4, max=8022, avg=21.75, stdev=235.84 00:28:51.013 clat (msec): min=26, max=142, avg=69.43, stdev=17.40 00:28:51.013 lat (msec): min=26, max=142, avg=69.45, stdev=17.41 00:28:51.013 clat percentiles (msec): 00:28:51.013 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 50], 00:28:51.013 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 72], 00:28:51.013 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 89], 95.00th=[ 99], 00:28:51.013 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 144], 00:28:51.013 | 99.99th=[ 144] 00:28:51.013 bw ( KiB/s): min= 766, max= 1024, per=4.18%, avg=915.50, stdev=81.51, samples=20 00:28:51.013 iops : min= 191, max= 256, avg=228.85, stdev=20.43, samples=20 00:28:51.013 lat (msec) : 50=21.03%, 100=74.33%, 250=4.64% 00:28:51.013 cpu : usr=31.13%, sys=1.81%, ctx=865, majf=0, minf=9 00:28:51.013 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:28:51.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.013 complete : 0=0.0%, 4=87.7%, 8=11.7%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.013 issued rwts: total=2306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.013 filename2: (groupid=0, jobs=1): err= 0: pid=79756: Fri Apr 26 12:25:42 2024 00:28:51.013 read: IOPS=223, BW=894KiB/s (915kB/s)(8976KiB/10040msec) 00:28:51.013 slat (usec): min=4, max=8026, avg=24.81, stdev=267.33 00:28:51.013 clat (msec): min=32, max=135, avg=71.45, stdev=17.09 00:28:51.013 lat (msec): min=32, max=135, avg=71.48, 
stdev=17.10 00:28:51.013 clat percentiles (msec): 00:28:51.013 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 56], 00:28:51.013 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:28:51.013 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 105], 00:28:51.013 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 128], 99.95th=[ 136], 00:28:51.013 | 99.99th=[ 136] 00:28:51.013 bw ( KiB/s): min= 768, max= 1024, per=4.07%, avg=891.00, stdev=74.36, samples=20 00:28:51.013 iops : min= 192, max= 256, avg=222.75, stdev=18.59, samples=20 00:28:51.013 lat (msec) : 50=13.01%, 100=80.97%, 250=6.02% 00:28:51.013 cpu : usr=35.94%, sys=2.13%, ctx=1137, majf=0, minf=9 00:28:51.013 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=79.5%, 16=16.1%, 32=0.0%, >=64=0.0% 00:28:51.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.013 complete : 0=0.0%, 4=88.4%, 8=10.8%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.013 issued rwts: total=2244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.013 filename2: (groupid=0, jobs=1): err= 0: pid=79757: Fri Apr 26 12:25:42 2024 00:28:51.013 read: IOPS=214, BW=858KiB/s (879kB/s)(8608KiB/10033msec) 00:28:51.013 slat (usec): min=3, max=8045, avg=18.20, stdev=173.20 00:28:51.013 clat (msec): min=37, max=141, avg=74.44, stdev=18.38 00:28:51.013 lat (msec): min=37, max=141, avg=74.46, stdev=18.38 00:28:51.013 clat percentiles (msec): 00:28:51.013 | 1.00th=[ 48], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 60], 00:28:51.013 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 75], 00:28:51.013 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 108], 00:28:51.013 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 140], 99.95th=[ 142], 00:28:51.013 | 99.99th=[ 142] 00:28:51.013 bw ( KiB/s): min= 624, max= 1024, per=3.90%, avg=854.30, stdev=110.82, samples=20 00:28:51.013 iops : min= 156, max= 256, avg=213.55, stdev=27.72, samples=20 00:28:51.013 lat (msec) : 50=13.34%, 100=77.65%, 250=9.01% 00:28:51.013 cpu : usr=34.59%, sys=2.22%, ctx=999, majf=0, minf=9 00:28:51.013 IO depths : 1=0.1%, 2=2.9%, 4=11.5%, 8=71.3%, 16=14.3%, 32=0.0%, >=64=0.0% 00:28:51.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.013 complete : 0=0.0%, 4=90.2%, 8=7.3%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.013 issued rwts: total=2152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.013 filename2: (groupid=0, jobs=1): err= 0: pid=79758: Fri Apr 26 12:25:42 2024 00:28:51.013 read: IOPS=227, BW=911KiB/s (933kB/s)(9148KiB/10042msec) 00:28:51.013 slat (usec): min=5, max=8032, avg=34.28, stdev=391.68 00:28:51.013 clat (msec): min=20, max=134, avg=70.03, stdev=17.40 00:28:51.013 lat (msec): min=20, max=134, avg=70.07, stdev=17.40 00:28:51.013 clat percentiles (msec): 00:28:51.013 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 52], 00:28:51.013 | 30.00th=[ 59], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:28:51.013 | 70.00th=[ 79], 80.00th=[ 83], 90.00th=[ 94], 95.00th=[ 100], 00:28:51.013 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 122], 00:28:51.013 | 99.99th=[ 136] 00:28:51.013 bw ( KiB/s): min= 768, max= 1024, per=4.15%, avg=908.05, stdev=72.89, samples=20 00:28:51.013 iops : min= 192, max= 256, avg=227.00, stdev=18.23, samples=20 00:28:51.013 lat (msec) : 50=16.35%, 100=78.79%, 250=4.85% 00:28:51.013 cpu : usr=35.07%, sys=2.09%, ctx=1207, majf=0, minf=9 00:28:51.013 IO 
depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=79.8%, 16=15.7%, 32=0.0%, >=64=0.0% 00:28:51.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.013 complete : 0=0.0%, 4=88.1%, 8=11.1%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.013 issued rwts: total=2287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.013 filename2: (groupid=0, jobs=1): err= 0: pid=79759: Fri Apr 26 12:25:42 2024 00:28:51.013 read: IOPS=232, BW=928KiB/s (951kB/s)(9312KiB/10031msec) 00:28:51.013 slat (usec): min=3, max=8025, avg=22.01, stdev=234.77 00:28:51.013 clat (msec): min=35, max=145, avg=68.77, stdev=17.33 00:28:51.013 lat (msec): min=35, max=145, avg=68.79, stdev=17.33 00:28:51.013 clat percentiles (msec): 00:28:51.013 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 50], 00:28:51.013 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 72], 00:28:51.013 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 87], 95.00th=[ 100], 00:28:51.013 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 146], 00:28:51.013 | 99.99th=[ 146] 00:28:51.013 bw ( KiB/s): min= 766, max= 1024, per=4.24%, avg=927.50, stdev=75.25, samples=20 00:28:51.013 iops : min= 191, max= 256, avg=231.85, stdev=18.87, samples=20 00:28:51.013 lat (msec) : 50=23.45%, 100=72.08%, 250=4.47% 00:28:51.013 cpu : usr=31.26%, sys=1.86%, ctx=866, majf=0, minf=9 00:28:51.013 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.9%, 16=15.4%, 32=0.0%, >=64=0.0% 00:28:51.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.013 complete : 0=0.0%, 4=87.5%, 8=11.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.013 issued rwts: total=2328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.013 filename2: (groupid=0, jobs=1): err= 0: pid=79760: Fri Apr 26 12:25:42 2024 00:28:51.013 read: IOPS=239, BW=959KiB/s (982kB/s)(9596KiB/10011msec) 00:28:51.013 slat (usec): min=4, max=8027, avg=32.87, stdev=316.57 00:28:51.013 clat (msec): min=21, max=138, avg=66.62, stdev=17.66 00:28:51.013 lat (msec): min=21, max=138, avg=66.65, stdev=17.67 00:28:51.013 clat percentiles (msec): 00:28:51.013 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 49], 00:28:51.013 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 70], 60.00th=[ 72], 00:28:51.013 | 70.00th=[ 75], 80.00th=[ 81], 90.00th=[ 87], 95.00th=[ 99], 00:28:51.013 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 131], 99.95th=[ 140], 00:28:51.013 | 99.99th=[ 140] 00:28:51.013 bw ( KiB/s): min= 768, max= 1072, per=4.36%, avg=953.20, stdev=72.36, samples=20 00:28:51.013 iops : min= 192, max= 268, avg=238.30, stdev=18.09, samples=20 00:28:51.013 lat (msec) : 50=23.80%, 100=71.74%, 250=4.46% 00:28:51.013 cpu : usr=37.07%, sys=2.57%, ctx=1128, majf=0, minf=9 00:28:51.013 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.7%, 16=15.4%, 32=0.0%, >=64=0.0% 00:28:51.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.013 complete : 0=0.0%, 4=86.9%, 8=12.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.013 issued rwts: total=2399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.013 filename2: (groupid=0, jobs=1): err= 0: pid=79761: Fri Apr 26 12:25:42 2024 00:28:51.013 read: IOPS=231, BW=925KiB/s (948kB/s)(9276KiB/10024msec) 00:28:51.013 slat (usec): min=3, max=4082, avg=19.91, stdev=145.26 00:28:51.013 clat (msec): min=29, max=143, avg=69.01, stdev=17.85 00:28:51.013 lat 
(msec): min=29, max=143, avg=69.03, stdev=17.85 00:28:51.013 clat percentiles (msec): 00:28:51.013 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 51], 00:28:51.013 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 73], 00:28:51.013 | 70.00th=[ 78], 80.00th=[ 82], 90.00th=[ 93], 95.00th=[ 106], 00:28:51.013 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 131], 99.95th=[ 144], 00:28:51.013 | 99.99th=[ 144] 00:28:51.013 bw ( KiB/s): min= 768, max= 1024, per=4.22%, avg=923.40, stdev=83.09, samples=20 00:28:51.014 iops : min= 192, max= 256, avg=230.85, stdev=20.77, samples=20 00:28:51.014 lat (msec) : 50=19.19%, 100=74.56%, 250=6.25% 00:28:51.014 cpu : usr=38.25%, sys=2.10%, ctx=1229, majf=0, minf=9 00:28:51.014 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=78.9%, 16=15.0%, 32=0.0%, >=64=0.0% 00:28:51.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.014 complete : 0=0.0%, 4=88.0%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.014 issued rwts: total=2319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.014 filename2: (groupid=0, jobs=1): err= 0: pid=79762: Fri Apr 26 12:25:42 2024 00:28:51.014 read: IOPS=237, BW=949KiB/s (972kB/s)(9528KiB/10035msec) 00:28:51.014 slat (usec): min=4, max=8024, avg=21.12, stdev=218.05 00:28:51.014 clat (msec): min=24, max=123, avg=67.22, stdev=17.33 00:28:51.014 lat (msec): min=24, max=123, avg=67.24, stdev=17.33 00:28:51.014 clat percentiles (msec): 00:28:51.014 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:28:51.014 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:28:51.014 | 70.00th=[ 77], 80.00th=[ 81], 90.00th=[ 85], 95.00th=[ 100], 00:28:51.014 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 124], 00:28:51.014 | 99.99th=[ 124] 00:28:51.014 bw ( KiB/s): min= 864, max= 1024, per=4.33%, avg=948.70, stdev=51.93, samples=20 00:28:51.014 iops : min= 216, max= 256, avg=237.15, stdev=12.98, samples=20 00:28:51.014 lat (msec) : 50=21.75%, 100=73.76%, 250=4.49% 00:28:51.014 cpu : usr=36.90%, sys=2.26%, ctx=1069, majf=0, minf=9 00:28:51.014 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.8%, 32=0.0%, >=64=0.0% 00:28:51.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.014 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.014 issued rwts: total=2382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:51.014 00:28:51.014 Run status group 0 (all jobs): 00:28:51.014 READ: bw=21.4MiB/s (22.4MB/s), 849KiB/s-959KiB/s (869kB/s-982kB/s), io=215MiB (225MB), run=10011-10059msec 00:28:51.014 12:25:42 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:51.014 12:25:42 -- target/dif.sh@43 -- # local sub 00:28:51.014 12:25:42 -- target/dif.sh@45 -- # for sub in "$@" 00:28:51.014 12:25:42 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:51.014 12:25:42 -- target/dif.sh@36 -- # local sub_id=0 00:28:51.014 12:25:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:51.014 12:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.014 12:25:42 -- common/autotest_common.sh@10 -- # set +x 00:28:51.014 12:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.014 12:25:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:51.014 12:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.014 12:25:42 -- 
common/autotest_common.sh@10 -- # set +x 00:28:51.014 12:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.014 12:25:42 -- target/dif.sh@45 -- # for sub in "$@" 00:28:51.014 12:25:42 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:51.014 12:25:42 -- target/dif.sh@36 -- # local sub_id=1 00:28:51.014 12:25:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:51.014 12:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.014 12:25:42 -- common/autotest_common.sh@10 -- # set +x 00:28:51.014 12:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.014 12:25:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:51.014 12:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.014 12:25:42 -- common/autotest_common.sh@10 -- # set +x 00:28:51.014 12:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.014 12:25:42 -- target/dif.sh@45 -- # for sub in "$@" 00:28:51.014 12:25:42 -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:51.014 12:25:42 -- target/dif.sh@36 -- # local sub_id=2 00:28:51.014 12:25:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:51.014 12:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.014 12:25:42 -- common/autotest_common.sh@10 -- # set +x 00:28:51.014 12:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.014 12:25:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:51.014 12:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.014 12:25:42 -- common/autotest_common.sh@10 -- # set +x 00:28:51.014 12:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.014 12:25:42 -- target/dif.sh@115 -- # NULL_DIF=1 00:28:51.014 12:25:42 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:51.014 12:25:42 -- target/dif.sh@115 -- # numjobs=2 00:28:51.014 12:25:42 -- target/dif.sh@115 -- # iodepth=8 00:28:51.014 12:25:42 -- target/dif.sh@115 -- # runtime=5 00:28:51.014 12:25:42 -- target/dif.sh@115 -- # files=1 00:28:51.014 12:25:42 -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:51.014 12:25:42 -- target/dif.sh@28 -- # local sub 00:28:51.014 12:25:42 -- target/dif.sh@30 -- # for sub in "$@" 00:28:51.014 12:25:42 -- target/dif.sh@31 -- # create_subsystem 0 00:28:51.014 12:25:42 -- target/dif.sh@18 -- # local sub_id=0 00:28:51.014 12:25:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:51.014 12:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.014 12:25:42 -- common/autotest_common.sh@10 -- # set +x 00:28:51.014 bdev_null0 00:28:51.014 12:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.014 12:25:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:51.014 12:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.014 12:25:42 -- common/autotest_common.sh@10 -- # set +x 00:28:51.014 12:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.014 12:25:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:51.014 12:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.014 12:25:42 -- common/autotest_common.sh@10 -- # set +x 00:28:51.014 12:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.014 12:25:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 
-t tcp -a 10.0.0.2 -s 4420 00:28:51.014 12:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.014 12:25:42 -- common/autotest_common.sh@10 -- # set +x 00:28:51.014 [2024-04-26 12:25:42.906775] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.014 12:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.014 12:25:42 -- target/dif.sh@30 -- # for sub in "$@" 00:28:51.014 12:25:42 -- target/dif.sh@31 -- # create_subsystem 1 00:28:51.014 12:25:42 -- target/dif.sh@18 -- # local sub_id=1 00:28:51.014 12:25:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:51.014 12:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.014 12:25:42 -- common/autotest_common.sh@10 -- # set +x 00:28:51.014 bdev_null1 00:28:51.014 12:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.014 12:25:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:51.014 12:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.014 12:25:42 -- common/autotest_common.sh@10 -- # set +x 00:28:51.014 12:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.014 12:25:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:51.014 12:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.014 12:25:42 -- common/autotest_common.sh@10 -- # set +x 00:28:51.014 12:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.014 12:25:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:51.014 12:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.014 12:25:42 -- common/autotest_common.sh@10 -- # set +x 00:28:51.014 12:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.014 12:25:42 -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:51.014 12:25:42 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:51.014 12:25:42 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:51.014 12:25:42 -- nvmf/common.sh@521 -- # config=() 00:28:51.014 12:25:42 -- nvmf/common.sh@521 -- # local subsystem config 00:28:51.014 12:25:42 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:51.014 12:25:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:51.014 12:25:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:51.014 { 00:28:51.014 "params": { 00:28:51.014 "name": "Nvme$subsystem", 00:28:51.014 "trtype": "$TEST_TRANSPORT", 00:28:51.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.014 "adrfam": "ipv4", 00:28:51.014 "trsvcid": "$NVMF_PORT", 00:28:51.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.014 "hdgst": ${hdgst:-false}, 00:28:51.014 "ddgst": ${ddgst:-false} 00:28:51.014 }, 00:28:51.014 "method": "bdev_nvme_attach_controller" 00:28:51.014 } 00:28:51.014 EOF 00:28:51.014 )") 00:28:51.014 12:25:42 -- target/dif.sh@82 -- # gen_fio_conf 00:28:51.014 12:25:42 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:51.014 12:25:42 -- target/dif.sh@54 -- # local file 00:28:51.014 12:25:42 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:51.014 12:25:42 -- target/dif.sh@56 
-- # cat 00:28:51.014 12:25:42 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:51.014 12:25:42 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:51.014 12:25:42 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:51.014 12:25:42 -- common/autotest_common.sh@1327 -- # shift 00:28:51.014 12:25:42 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:51.014 12:25:42 -- nvmf/common.sh@543 -- # cat 00:28:51.014 12:25:42 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:51.014 12:25:42 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:51.014 12:25:42 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:51.014 12:25:42 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:51.014 12:25:42 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:51.014 12:25:42 -- target/dif.sh@72 -- # (( file <= files )) 00:28:51.014 12:25:42 -- target/dif.sh@73 -- # cat 00:28:51.014 12:25:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:51.014 12:25:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:51.014 { 00:28:51.014 "params": { 00:28:51.014 "name": "Nvme$subsystem", 00:28:51.014 "trtype": "$TEST_TRANSPORT", 00:28:51.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.014 "adrfam": "ipv4", 00:28:51.015 "trsvcid": "$NVMF_PORT", 00:28:51.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.015 "hdgst": ${hdgst:-false}, 00:28:51.015 "ddgst": ${ddgst:-false} 00:28:51.015 }, 00:28:51.015 "method": "bdev_nvme_attach_controller" 00:28:51.015 } 00:28:51.015 EOF 00:28:51.015 )") 00:28:51.015 12:25:42 -- nvmf/common.sh@543 -- # cat 00:28:51.015 12:25:42 -- target/dif.sh@72 -- # (( file++ )) 00:28:51.015 12:25:42 -- target/dif.sh@72 -- # (( file <= files )) 00:28:51.015 12:25:42 -- nvmf/common.sh@545 -- # jq . 
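(The two heredocs above are gen_nvmf_target_json stamping out one bdev_nvme_attach_controller entry per subsystem; jq then folds them into the JSON that fio's spdk_bdev ioengine reads from /dev/fd/62 while the generated job file arrives on /dev/fd/61. A rough standalone equivalent of that fio_bdev call, assuming the paths this job uses and with ordinary files standing in for the two descriptors (bdev.json and dif.fio are placeholders, not files the script actually writes), would be:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf ./bdev.json ./dif.fio
)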
00:28:51.015 12:25:42 -- nvmf/common.sh@546 -- # IFS=, 00:28:51.015 12:25:42 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:51.015 "params": { 00:28:51.015 "name": "Nvme0", 00:28:51.015 "trtype": "tcp", 00:28:51.015 "traddr": "10.0.0.2", 00:28:51.015 "adrfam": "ipv4", 00:28:51.015 "trsvcid": "4420", 00:28:51.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:51.015 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:51.015 "hdgst": false, 00:28:51.015 "ddgst": false 00:28:51.015 }, 00:28:51.015 "method": "bdev_nvme_attach_controller" 00:28:51.015 },{ 00:28:51.015 "params": { 00:28:51.015 "name": "Nvme1", 00:28:51.015 "trtype": "tcp", 00:28:51.015 "traddr": "10.0.0.2", 00:28:51.015 "adrfam": "ipv4", 00:28:51.015 "trsvcid": "4420", 00:28:51.015 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:51.015 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:51.015 "hdgst": false, 00:28:51.015 "ddgst": false 00:28:51.015 }, 00:28:51.015 "method": "bdev_nvme_attach_controller" 00:28:51.015 }' 00:28:51.015 12:25:42 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:51.015 12:25:42 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:51.015 12:25:42 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:51.015 12:25:42 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:51.015 12:25:42 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:51.015 12:25:42 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:51.015 12:25:43 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:51.015 12:25:43 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:51.015 12:25:43 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:51.015 12:25:43 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:51.015 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:51.015 ... 00:28:51.015 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:51.015 ... 
00:28:51.015 fio-3.35 00:28:51.015 Starting 4 threads 00:28:56.288 00:28:56.288 filename0: (groupid=0, jobs=1): err= 0: pid=79907: Fri Apr 26 12:25:48 2024 00:28:56.288 read: IOPS=2259, BW=17.6MiB/s (18.5MB/s)(88.3MiB/5001msec) 00:28:56.288 slat (usec): min=6, max=604, avg=11.50, stdev= 6.80 00:28:56.288 clat (usec): min=658, max=10110, avg=3506.74, stdev=976.31 00:28:56.288 lat (usec): min=667, max=10119, avg=3518.23, stdev=976.71 00:28:56.288 clat percentiles (usec): 00:28:56.288 | 1.00th=[ 1385], 5.00th=[ 1418], 10.00th=[ 1434], 20.00th=[ 2966], 00:28:56.288 | 30.00th=[ 3326], 40.00th=[ 3785], 50.00th=[ 3916], 60.00th=[ 3916], 00:28:56.288 | 70.00th=[ 4015], 80.00th=[ 4113], 90.00th=[ 4293], 95.00th=[ 4621], 00:28:56.288 | 99.00th=[ 5211], 99.50th=[ 5211], 99.90th=[ 6980], 99.95th=[ 8717], 00:28:56.288 | 99.99th=[10028] 00:28:56.288 bw ( KiB/s): min=15776, max=20656, per=27.63%, avg=18065.78, stdev=2024.21, samples=9 00:28:56.288 iops : min= 1972, max= 2582, avg=2258.22, stdev=253.03, samples=9 00:28:56.288 lat (usec) : 750=0.05%, 1000=0.12% 00:28:56.288 lat (msec) : 2=12.37%, 4=57.02%, 10=30.43%, 20=0.01% 00:28:56.288 cpu : usr=91.34%, sys=7.52%, ctx=11, majf=0, minf=0 00:28:56.288 IO depths : 1=0.1%, 2=8.1%, 4=60.5%, 8=31.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:56.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.288 complete : 0=0.0%, 4=96.9%, 8=3.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.288 issued rwts: total=11298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:56.288 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:56.288 filename0: (groupid=0, jobs=1): err= 0: pid=79908: Fri Apr 26 12:25:48 2024 00:28:56.288 read: IOPS=1934, BW=15.1MiB/s (15.8MB/s)(75.6MiB/5003msec) 00:28:56.288 slat (nsec): min=3942, max=50676, avg=15299.77, stdev=3302.67 00:28:56.288 clat (usec): min=1174, max=7075, avg=4079.19, stdev=501.40 00:28:56.288 lat (usec): min=1182, max=7095, avg=4094.49, stdev=501.45 00:28:56.288 clat percentiles (usec): 00:28:56.288 | 1.00th=[ 2089], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3884], 00:28:56.288 | 30.00th=[ 3916], 40.00th=[ 3949], 50.00th=[ 4178], 60.00th=[ 4228], 00:28:56.288 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 4883], 00:28:56.288 | 99.00th=[ 5276], 99.50th=[ 5407], 99.90th=[ 6456], 99.95th=[ 6652], 00:28:56.288 | 99.99th=[ 7046] 00:28:56.288 bw ( KiB/s): min=14544, max=16720, per=23.66%, avg=15472.00, stdev=655.33, samples=10 00:28:56.288 iops : min= 1818, max= 2090, avg=1934.00, stdev=81.92, samples=10 00:28:56.288 lat (msec) : 2=0.62%, 4=43.46%, 10=55.92% 00:28:56.288 cpu : usr=91.76%, sys=7.42%, ctx=4, majf=0, minf=10 00:28:56.288 IO depths : 1=0.1%, 2=20.3%, 4=53.8%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:56.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.288 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.288 issued rwts: total=9678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:56.288 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:56.288 filename1: (groupid=0, jobs=1): err= 0: pid=79909: Fri Apr 26 12:25:48 2024 00:28:56.288 read: IOPS=2041, BW=15.9MiB/s (16.7MB/s)(79.8MiB/5003msec) 00:28:56.288 slat (nsec): min=7495, max=40035, avg=14308.21, stdev=3922.63 00:28:56.288 clat (usec): min=788, max=7105, avg=3869.16, stdev=707.19 00:28:56.288 lat (usec): min=796, max=7119, avg=3883.47, stdev=707.81 00:28:56.288 clat percentiles (usec): 00:28:56.288 | 1.00th=[ 1401], 5.00th=[ 2212], 10.00th=[ 2999], 
20.00th=[ 3392], 00:28:56.288 | 30.00th=[ 3884], 40.00th=[ 3916], 50.00th=[ 3949], 60.00th=[ 4178], 00:28:56.288 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4752], 00:28:56.288 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 5407], 99.95th=[ 5473], 00:28:56.288 | 99.99th=[ 7046] 00:28:56.288 bw ( KiB/s): min=14976, max=18880, per=24.66%, avg=16122.67, stdev=1232.52, samples=9 00:28:56.288 iops : min= 1872, max= 2360, avg=2015.33, stdev=154.06, samples=9 00:28:56.288 lat (usec) : 1000=0.13% 00:28:56.288 lat (msec) : 2=3.08%, 4=49.69%, 10=47.10% 00:28:56.288 cpu : usr=91.88%, sys=7.26%, ctx=21, majf=0, minf=9 00:28:56.288 IO depths : 1=0.1%, 2=16.0%, 4=56.3%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:56.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.288 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.289 issued rwts: total=10213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:56.289 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:56.289 filename1: (groupid=0, jobs=1): err= 0: pid=79910: Fri Apr 26 12:25:48 2024 00:28:56.289 read: IOPS=1939, BW=15.1MiB/s (15.9MB/s)(75.8MiB/5002msec) 00:28:56.289 slat (nsec): min=7670, max=51700, avg=15140.06, stdev=4531.58 00:28:56.289 clat (usec): min=998, max=6440, avg=4067.66, stdev=543.84 00:28:56.289 lat (usec): min=1006, max=6466, avg=4082.80, stdev=544.12 00:28:56.289 clat percentiles (usec): 00:28:56.289 | 1.00th=[ 1909], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3884], 00:28:56.289 | 30.00th=[ 3916], 40.00th=[ 3949], 50.00th=[ 4146], 60.00th=[ 4228], 00:28:56.289 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 5014], 00:28:56.289 | 99.00th=[ 5407], 99.50th=[ 5932], 99.90th=[ 6259], 99.95th=[ 6259], 00:28:56.289 | 99.99th=[ 6456] 00:28:56.289 bw ( KiB/s): min=14544, max=16720, per=23.72%, avg=15510.20, stdev=731.12, samples=10 00:28:56.289 iops : min= 1818, max= 2090, avg=1938.70, stdev=91.42, samples=10 00:28:56.289 lat (usec) : 1000=0.02% 00:28:56.289 lat (msec) : 2=1.12%, 4=43.41%, 10=55.44% 00:28:56.289 cpu : usr=91.14%, sys=7.92%, ctx=6, majf=0, minf=9 00:28:56.289 IO depths : 1=0.1%, 2=20.1%, 4=53.9%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:56.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.289 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.289 issued rwts: total=9700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:56.289 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:56.289 00:28:56.289 Run status group 0 (all jobs): 00:28:56.289 READ: bw=63.9MiB/s (67.0MB/s), 15.1MiB/s-17.6MiB/s (15.8MB/s-18.5MB/s), io=319MiB (335MB), run=5001-5003msec 00:28:56.289 12:25:48 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:56.289 12:25:48 -- target/dif.sh@43 -- # local sub 00:28:56.289 12:25:48 -- target/dif.sh@45 -- # for sub in "$@" 00:28:56.289 12:25:48 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:56.289 12:25:48 -- target/dif.sh@36 -- # local sub_id=0 00:28:56.289 12:25:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:56.289 12:25:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.289 12:25:48 -- common/autotest_common.sh@10 -- # set +x 00:28:56.289 12:25:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.289 12:25:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:56.289 12:25:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.289 12:25:48 -- 
common/autotest_common.sh@10 -- # set +x 00:28:56.289 12:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.289 12:25:49 -- target/dif.sh@45 -- # for sub in "$@" 00:28:56.289 12:25:49 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:56.289 12:25:49 -- target/dif.sh@36 -- # local sub_id=1 00:28:56.289 12:25:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:56.289 12:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.289 12:25:49 -- common/autotest_common.sh@10 -- # set +x 00:28:56.289 12:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.289 12:25:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:56.289 12:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.289 12:25:49 -- common/autotest_common.sh@10 -- # set +x 00:28:56.289 12:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.289 00:28:56.289 real 0m23.528s 00:28:56.289 user 2m3.021s 00:28:56.289 sys 0m8.748s 00:28:56.289 12:25:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:56.289 12:25:49 -- common/autotest_common.sh@10 -- # set +x 00:28:56.289 ************************************ 00:28:56.289 END TEST fio_dif_rand_params 00:28:56.289 ************************************ 00:28:56.289 12:25:49 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:56.289 12:25:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:56.289 12:25:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:56.289 12:25:49 -- common/autotest_common.sh@10 -- # set +x 00:28:56.289 ************************************ 00:28:56.289 START TEST fio_dif_digest 00:28:56.289 ************************************ 00:28:56.289 12:25:49 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:28:56.289 12:25:49 -- target/dif.sh@123 -- # local NULL_DIF 00:28:56.289 12:25:49 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:56.289 12:25:49 -- target/dif.sh@125 -- # local hdgst ddgst 00:28:56.289 12:25:49 -- target/dif.sh@127 -- # NULL_DIF=3 00:28:56.289 12:25:49 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:56.289 12:25:49 -- target/dif.sh@127 -- # numjobs=3 00:28:56.289 12:25:49 -- target/dif.sh@127 -- # iodepth=3 00:28:56.289 12:25:49 -- target/dif.sh@127 -- # runtime=10 00:28:56.289 12:25:49 -- target/dif.sh@128 -- # hdgst=true 00:28:56.289 12:25:49 -- target/dif.sh@128 -- # ddgst=true 00:28:56.289 12:25:49 -- target/dif.sh@130 -- # create_subsystems 0 00:28:56.289 12:25:49 -- target/dif.sh@28 -- # local sub 00:28:56.289 12:25:49 -- target/dif.sh@30 -- # for sub in "$@" 00:28:56.289 12:25:49 -- target/dif.sh@31 -- # create_subsystem 0 00:28:56.289 12:25:49 -- target/dif.sh@18 -- # local sub_id=0 00:28:56.289 12:25:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:56.289 12:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.289 12:25:49 -- common/autotest_common.sh@10 -- # set +x 00:28:56.289 bdev_null0 00:28:56.289 12:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.289 12:25:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:56.289 12:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.289 12:25:49 -- common/autotest_common.sh@10 -- # set +x 00:28:56.289 12:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.289 12:25:49 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:56.289 12:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.289 12:25:49 -- common/autotest_common.sh@10 -- # set +x 00:28:56.289 12:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.289 12:25:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:56.289 12:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.289 12:25:49 -- common/autotest_common.sh@10 -- # set +x 00:28:56.289 [2024-04-26 12:25:49.178338] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.289 12:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.289 12:25:49 -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:56.289 12:25:49 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:56.289 12:25:49 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:56.289 12:25:49 -- nvmf/common.sh@521 -- # config=() 00:28:56.289 12:25:49 -- nvmf/common.sh@521 -- # local subsystem config 00:28:56.289 12:25:49 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:56.289 12:25:49 -- target/dif.sh@82 -- # gen_fio_conf 00:28:56.289 12:25:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:56.289 12:25:49 -- target/dif.sh@54 -- # local file 00:28:56.289 12:25:49 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:56.289 12:25:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:56.289 { 00:28:56.289 "params": { 00:28:56.289 "name": "Nvme$subsystem", 00:28:56.289 "trtype": "$TEST_TRANSPORT", 00:28:56.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:56.289 "adrfam": "ipv4", 00:28:56.289 "trsvcid": "$NVMF_PORT", 00:28:56.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:56.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:56.289 "hdgst": ${hdgst:-false}, 00:28:56.289 "ddgst": ${ddgst:-false} 00:28:56.289 }, 00:28:56.289 "method": "bdev_nvme_attach_controller" 00:28:56.289 } 00:28:56.289 EOF 00:28:56.289 )") 00:28:56.289 12:25:49 -- target/dif.sh@56 -- # cat 00:28:56.289 12:25:49 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:56.289 12:25:49 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:56.289 12:25:49 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:56.289 12:25:49 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:56.289 12:25:49 -- common/autotest_common.sh@1327 -- # shift 00:28:56.289 12:25:49 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:56.289 12:25:49 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:56.289 12:25:49 -- nvmf/common.sh@543 -- # cat 00:28:56.289 12:25:49 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:56.289 12:25:49 -- target/dif.sh@72 -- # (( file <= files )) 00:28:56.289 12:25:49 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:56.289 12:25:49 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:56.289 12:25:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:56.289 12:25:49 -- nvmf/common.sh@545 -- # jq . 
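(The target side of this digest pass was assembled just above through rpc_cmd, which forwards its arguments to scripts/rpc.py; as a sketch, the same state can be rebuilt by hand over the default RPC socket:

    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The hdgst/ddgst flags in the attach parameters printed below are what distinguish this run from the earlier passes: they enable NVMe/TCP header and data digests on the initiator-side bdev_nvme controller.)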
00:28:56.289 12:25:49 -- nvmf/common.sh@546 -- # IFS=, 00:28:56.289 12:25:49 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:56.289 "params": { 00:28:56.289 "name": "Nvme0", 00:28:56.289 "trtype": "tcp", 00:28:56.289 "traddr": "10.0.0.2", 00:28:56.289 "adrfam": "ipv4", 00:28:56.289 "trsvcid": "4420", 00:28:56.289 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:56.289 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:56.289 "hdgst": true, 00:28:56.289 "ddgst": true 00:28:56.289 }, 00:28:56.289 "method": "bdev_nvme_attach_controller" 00:28:56.289 }' 00:28:56.289 12:25:49 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:56.289 12:25:49 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:56.289 12:25:49 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:56.289 12:25:49 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:56.289 12:25:49 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:56.289 12:25:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:56.289 12:25:49 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:56.289 12:25:49 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:56.289 12:25:49 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:56.289 12:25:49 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:56.289 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:56.289 ... 00:28:56.289 fio-3.35 00:28:56.289 Starting 3 threads 00:29:08.516 00:29:08.516 filename0: (groupid=0, jobs=1): err= 0: pid=80022: Fri Apr 26 12:25:59 2024 00:29:08.516 read: IOPS=227, BW=28.5MiB/s (29.8MB/s)(285MiB/10004msec) 00:29:08.516 slat (nsec): min=7679, max=51504, avg=16909.69, stdev=5382.56 00:29:08.516 clat (usec): min=12955, max=13981, avg=13141.49, stdev=108.86 00:29:08.516 lat (usec): min=12969, max=14006, avg=13158.40, stdev=109.39 00:29:08.516 clat percentiles (usec): 00:29:08.516 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13042], 20.00th=[13042], 00:29:08.516 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13173], 60.00th=[13173], 00:29:08.517 | 70.00th=[13173], 80.00th=[13173], 90.00th=[13304], 95.00th=[13304], 00:29:08.517 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13960], 99.95th=[13960], 00:29:08.517 | 99.99th=[13960] 00:29:08.517 bw ( KiB/s): min=28416, max=29184, per=33.35%, avg=29143.58, stdev=176.19, samples=19 00:29:08.517 iops : min= 222, max= 228, avg=227.68, stdev= 1.38, samples=19 00:29:08.517 lat (msec) : 20=100.00% 00:29:08.517 cpu : usr=91.53%, sys=7.92%, ctx=11, majf=0, minf=0 00:29:08.517 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:08.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.517 issued rwts: total=2277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:08.517 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:08.517 filename0: (groupid=0, jobs=1): err= 0: pid=80023: Fri Apr 26 12:25:59 2024 00:29:08.517 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(285MiB/10006msec) 00:29:08.517 slat (nsec): min=7742, max=76680, avg=17283.88, stdev=5253.78 00:29:08.517 clat (usec): min=12934, max=15587, avg=13142.41, stdev=141.05 00:29:08.517 lat (usec): min=12942, max=15609, avg=13159.69, stdev=141.64 00:29:08.517 clat 
percentiles (usec): 00:29:08.517 | 1.00th=[12911], 5.00th=[13042], 10.00th=[13042], 20.00th=[13042], 00:29:08.517 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13173], 60.00th=[13173], 00:29:08.517 | 70.00th=[13173], 80.00th=[13173], 90.00th=[13304], 95.00th=[13435], 00:29:08.517 | 99.00th=[13566], 99.50th=[13566], 99.90th=[15533], 99.95th=[15533], 00:29:08.517 | 99.99th=[15533] 00:29:08.517 bw ( KiB/s): min=28472, max=29184, per=33.36%, avg=29146.53, stdev=163.34, samples=19 00:29:08.517 iops : min= 222, max= 228, avg=227.68, stdev= 1.38, samples=19 00:29:08.517 lat (msec) : 20=100.00% 00:29:08.517 cpu : usr=91.60%, sys=7.81%, ctx=8, majf=0, minf=0 00:29:08.517 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:08.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.517 issued rwts: total=2277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:08.517 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:08.517 filename0: (groupid=0, jobs=1): err= 0: pid=80024: Fri Apr 26 12:25:59 2024 00:29:08.517 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(285MiB/10007msec) 00:29:08.517 slat (nsec): min=7771, max=79984, avg=16874.36, stdev=5908.55 00:29:08.517 clat (usec): min=12931, max=16710, avg=13144.94, stdev=171.78 00:29:08.517 lat (usec): min=12945, max=16741, avg=13161.81, stdev=172.67 00:29:08.517 clat percentiles (usec): 00:29:08.517 | 1.00th=[12911], 5.00th=[13042], 10.00th=[13042], 20.00th=[13042], 00:29:08.517 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13173], 60.00th=[13173], 00:29:08.517 | 70.00th=[13173], 80.00th=[13173], 90.00th=[13304], 95.00th=[13435], 00:29:08.517 | 99.00th=[13435], 99.50th=[13566], 99.90th=[16712], 99.95th=[16712], 00:29:08.517 | 99.99th=[16712] 00:29:08.517 bw ( KiB/s): min=28416, max=29184, per=33.35%, avg=29143.58, stdev=176.19, samples=19 00:29:08.517 iops : min= 222, max= 228, avg=227.68, stdev= 1.38, samples=19 00:29:08.517 lat (msec) : 20=100.00% 00:29:08.517 cpu : usr=91.40%, sys=8.03%, ctx=13, majf=0, minf=0 00:29:08.517 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:08.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.517 issued rwts: total=2277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:08.517 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:08.517 00:29:08.517 Run status group 0 (all jobs): 00:29:08.517 READ: bw=85.3MiB/s (89.5MB/s), 28.4MiB/s-28.5MiB/s (29.8MB/s-29.8MB/s), io=854MiB (895MB), run=10004-10007msec 00:29:08.517 12:26:00 -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:08.517 12:26:00 -- target/dif.sh@43 -- # local sub 00:29:08.517 12:26:00 -- target/dif.sh@45 -- # for sub in "$@" 00:29:08.517 12:26:00 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:08.517 12:26:00 -- target/dif.sh@36 -- # local sub_id=0 00:29:08.517 12:26:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:08.517 12:26:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.517 12:26:00 -- common/autotest_common.sh@10 -- # set +x 00:29:08.517 12:26:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:08.517 12:26:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:08.517 12:26:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.517 12:26:00 -- common/autotest_common.sh@10 -- 
# set +x 00:29:08.517 12:26:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:08.517 00:29:08.517 real 0m11.026s 00:29:08.517 user 0m28.124s 00:29:08.517 sys 0m2.658s 00:29:08.517 12:26:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:08.517 ************************************ 00:29:08.517 12:26:00 -- common/autotest_common.sh@10 -- # set +x 00:29:08.517 END TEST fio_dif_digest 00:29:08.517 ************************************ 00:29:08.517 12:26:00 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:08.517 12:26:00 -- target/dif.sh@147 -- # nvmftestfini 00:29:08.517 12:26:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:08.517 12:26:00 -- nvmf/common.sh@117 -- # sync 00:29:08.517 12:26:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:08.517 12:26:00 -- nvmf/common.sh@120 -- # set +e 00:29:08.517 12:26:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:08.517 12:26:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:08.517 rmmod nvme_tcp 00:29:08.517 rmmod nvme_fabrics 00:29:08.517 rmmod nvme_keyring 00:29:08.517 12:26:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:08.517 12:26:00 -- nvmf/common.sh@124 -- # set -e 00:29:08.517 12:26:00 -- nvmf/common.sh@125 -- # return 0 00:29:08.517 12:26:00 -- nvmf/common.sh@478 -- # '[' -n 79240 ']' 00:29:08.517 12:26:00 -- nvmf/common.sh@479 -- # killprocess 79240 00:29:08.517 12:26:00 -- common/autotest_common.sh@936 -- # '[' -z 79240 ']' 00:29:08.517 12:26:00 -- common/autotest_common.sh@940 -- # kill -0 79240 00:29:08.517 12:26:00 -- common/autotest_common.sh@941 -- # uname 00:29:08.517 12:26:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:08.517 12:26:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79240 00:29:08.517 12:26:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:08.517 12:26:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:08.517 killing process with pid 79240 00:29:08.517 12:26:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79240' 00:29:08.517 12:26:00 -- common/autotest_common.sh@955 -- # kill 79240 00:29:08.517 12:26:00 -- common/autotest_common.sh@960 -- # wait 79240 00:29:08.517 12:26:00 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:29:08.517 12:26:00 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:08.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:08.517 Waiting for block devices as requested 00:29:08.517 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:08.517 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:08.517 12:26:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:08.517 12:26:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:08.517 12:26:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:08.517 12:26:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:08.517 12:26:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.517 12:26:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:08.517 12:26:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.517 12:26:01 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:08.517 00:29:08.517 real 1m0.051s 00:29:08.517 user 3m48.247s 00:29:08.517 sys 0m19.978s 00:29:08.517 12:26:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:08.517 12:26:01 -- 
common/autotest_common.sh@10 -- # set +x 00:29:08.517 ************************************ 00:29:08.517 END TEST nvmf_dif 00:29:08.517 ************************************ 00:29:08.517 12:26:01 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:08.517 12:26:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:08.517 12:26:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:08.517 12:26:01 -- common/autotest_common.sh@10 -- # set +x 00:29:08.517 ************************************ 00:29:08.517 START TEST nvmf_abort_qd_sizes 00:29:08.517 ************************************ 00:29:08.517 12:26:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:08.517 * Looking for test storage... 00:29:08.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:08.517 12:26:01 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:08.517 12:26:01 -- nvmf/common.sh@7 -- # uname -s 00:29:08.517 12:26:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:08.517 12:26:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:08.517 12:26:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:08.517 12:26:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:08.517 12:26:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:08.517 12:26:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:08.517 12:26:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:08.517 12:26:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:08.517 12:26:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:08.517 12:26:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:08.517 12:26:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:29:08.517 12:26:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:29:08.517 12:26:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:08.517 12:26:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:08.517 12:26:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:08.517 12:26:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:08.517 12:26:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:08.517 12:26:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:08.517 12:26:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.517 12:26:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:08.518 12:26:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.518 12:26:01 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.518 12:26:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.518 12:26:01 -- paths/export.sh@5 -- # export PATH 00:29:08.518 12:26:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.518 12:26:01 -- nvmf/common.sh@47 -- # : 0 00:29:08.518 12:26:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:08.518 12:26:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:08.518 12:26:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:08.518 12:26:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:08.518 12:26:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:08.518 12:26:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:08.518 12:26:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:08.518 12:26:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:08.518 12:26:01 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:08.518 12:26:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:08.518 12:26:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.518 12:26:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:08.518 12:26:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:08.518 12:26:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:08.518 12:26:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.518 12:26:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:08.518 12:26:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.518 12:26:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:29:08.518 12:26:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:29:08.518 12:26:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:29:08.518 12:26:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:29:08.518 12:26:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:29:08.518 12:26:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:29:08.518 12:26:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.518 12:26:01 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.518 12:26:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:08.518 12:26:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:08.518 12:26:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:08.518 12:26:01 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 
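(The interface and address variables here and just below name the pieces nvmf_veth_init wires together: the host keeps nvmf_init_if at 10.0.0.1, the two target interfaces at 10.0.0.2 and 10.0.0.3 are moved into the nvmf_tgt_ns_spdk namespace, and the peer ends of the veth pairs are enslaved to the nvmf_br bridge. Condensed from the trace that follows, with link-up steps, stale-device teardown and the FORWARD rule omitted, the topology is built roughly like this:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings at the end of the trace confirm the bridge path before the target is started inside the namespace.)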
00:29:08.518 12:26:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:08.518 12:26:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.518 12:26:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:08.518 12:26:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:08.518 12:26:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:08.518 12:26:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:08.518 12:26:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:08.518 12:26:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:08.518 Cannot find device "nvmf_tgt_br" 00:29:08.518 12:26:01 -- nvmf/common.sh@155 -- # true 00:29:08.518 12:26:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:08.518 Cannot find device "nvmf_tgt_br2" 00:29:08.518 12:26:01 -- nvmf/common.sh@156 -- # true 00:29:08.518 12:26:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:08.518 12:26:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:08.518 Cannot find device "nvmf_tgt_br" 00:29:08.518 12:26:01 -- nvmf/common.sh@158 -- # true 00:29:08.518 12:26:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:08.518 Cannot find device "nvmf_tgt_br2" 00:29:08.518 12:26:01 -- nvmf/common.sh@159 -- # true 00:29:08.518 12:26:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:08.518 12:26:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:08.518 12:26:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:08.518 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:08.518 12:26:01 -- nvmf/common.sh@162 -- # true 00:29:08.518 12:26:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:08.518 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:08.518 12:26:01 -- nvmf/common.sh@163 -- # true 00:29:08.518 12:26:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:08.518 12:26:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:08.518 12:26:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:08.518 12:26:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:08.518 12:26:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:08.518 12:26:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:08.518 12:26:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:08.518 12:26:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:08.518 12:26:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:08.518 12:26:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:08.518 12:26:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:08.518 12:26:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:08.518 12:26:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:08.518 12:26:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:08.518 12:26:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:08.518 12:26:01 -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:08.518 12:26:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:08.518 12:26:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:08.518 12:26:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:08.518 12:26:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:08.518 12:26:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:08.518 12:26:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:08.518 12:26:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:08.518 12:26:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:08.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:08.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:29:08.518 00:29:08.518 --- 10.0.0.2 ping statistics --- 00:29:08.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.518 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:29:08.518 12:26:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:08.518 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:08.518 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:29:08.518 00:29:08.518 --- 10.0.0.3 ping statistics --- 00:29:08.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.518 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:29:08.518 12:26:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:08.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:08.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:29:08.518 00:29:08.518 --- 10.0.0.1 ping statistics --- 00:29:08.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.518 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:29:08.518 12:26:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:08.518 12:26:01 -- nvmf/common.sh@422 -- # return 0 00:29:08.518 12:26:01 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:29:08.518 12:26:01 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:09.086 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:09.086 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:09.086 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:09.086 12:26:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.086 12:26:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:09.086 12:26:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:09.086 12:26:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.086 12:26:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:09.086 12:26:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:09.086 12:26:02 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:09.086 12:26:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:09.086 12:26:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:09.086 12:26:02 -- common/autotest_common.sh@10 -- # set +x 00:29:09.086 12:26:02 -- nvmf/common.sh@470 -- # nvmfpid=80614 00:29:09.086 12:26:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:09.086 12:26:02 -- nvmf/common.sh@471 -- # waitforlisten 80614 00:29:09.086 12:26:02 -- 
common/autotest_common.sh@817 -- # '[' -z 80614 ']' 00:29:09.086 12:26:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.086 12:26:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:09.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.086 12:26:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.086 12:26:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:09.086 12:26:02 -- common/autotest_common.sh@10 -- # set +x 00:29:09.344 [2024-04-26 12:26:02.591597] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:29:09.344 [2024-04-26 12:26:02.591727] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.344 [2024-04-26 12:26:02.737877] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:09.603 [2024-04-26 12:26:02.871367] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.603 [2024-04-26 12:26:02.871430] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.603 [2024-04-26 12:26:02.871445] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.603 [2024-04-26 12:26:02.871456] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.603 [2024-04-26 12:26:02.871465] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
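(The app_setup_trace notices above describe two ways to pull the 0xFFFF tracepoint data out of the running target; roughly, with the spdk_trace binary location assumed from the build layout used elsewhere in this job:

    # live snapshot of the trace ring of shm instance 0 (from the -i 0 flag)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # or keep the raw shared-memory file for offline decoding
    cp /dev/shm/nvmf_trace.0 .
)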
00:29:09.603 [2024-04-26 12:26:02.871695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.603 [2024-04-26 12:26:02.872329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:09.603 [2024-04-26 12:26:02.872459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:09.603 [2024-04-26 12:26:02.872464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.169 12:26:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:10.169 12:26:03 -- common/autotest_common.sh@850 -- # return 0 00:29:10.169 12:26:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:10.169 12:26:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:10.169 12:26:03 -- common/autotest_common.sh@10 -- # set +x 00:29:10.169 12:26:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.169 12:26:03 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:10.169 12:26:03 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:10.169 12:26:03 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:10.169 12:26:03 -- scripts/common.sh@309 -- # local bdf bdfs 00:29:10.169 12:26:03 -- scripts/common.sh@310 -- # local nvmes 00:29:10.169 12:26:03 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:29:10.169 12:26:03 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:29:10.169 12:26:03 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:29:10.169 12:26:03 -- scripts/common.sh@295 -- # local bdf= 00:29:10.169 12:26:03 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:29:10.169 12:26:03 -- scripts/common.sh@230 -- # local class 00:29:10.169 12:26:03 -- scripts/common.sh@231 -- # local subclass 00:29:10.169 12:26:03 -- scripts/common.sh@232 -- # local progif 00:29:10.169 12:26:03 -- scripts/common.sh@233 -- # printf %02x 1 00:29:10.169 12:26:03 -- scripts/common.sh@233 -- # class=01 00:29:10.169 12:26:03 -- scripts/common.sh@234 -- # printf %02x 8 00:29:10.169 12:26:03 -- scripts/common.sh@234 -- # subclass=08 00:29:10.169 12:26:03 -- scripts/common.sh@235 -- # printf %02x 2 00:29:10.169 12:26:03 -- scripts/common.sh@235 -- # progif=02 00:29:10.169 12:26:03 -- scripts/common.sh@237 -- # hash lspci 00:29:10.169 12:26:03 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:29:10.169 12:26:03 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:29:10.169 12:26:03 -- scripts/common.sh@240 -- # grep -i -- -p02 00:29:10.169 12:26:03 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:29:10.169 12:26:03 -- scripts/common.sh@242 -- # tr -d '"' 00:29:10.169 12:26:03 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:29:10.169 12:26:03 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:29:10.169 12:26:03 -- scripts/common.sh@15 -- # local i 00:29:10.169 12:26:03 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:29:10.169 12:26:03 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:10.169 12:26:03 -- scripts/common.sh@24 -- # return 0 00:29:10.169 12:26:03 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:29:10.169 12:26:03 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:29:10.169 12:26:03 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:29:10.169 12:26:03 -- scripts/common.sh@15 -- # local i 00:29:10.169 12:26:03 -- scripts/common.sh@18 -- # [[ =~ 
0000:00:11.0 ]] 00:29:10.169 12:26:03 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:10.169 12:26:03 -- scripts/common.sh@24 -- # return 0 00:29:10.169 12:26:03 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:29:10.169 12:26:03 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:10.169 12:26:03 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:29:10.169 12:26:03 -- scripts/common.sh@320 -- # uname -s 00:29:10.428 12:26:03 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:10.428 12:26:03 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:10.428 12:26:03 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:10.428 12:26:03 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:29:10.428 12:26:03 -- scripts/common.sh@320 -- # uname -s 00:29:10.428 12:26:03 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:10.428 12:26:03 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:10.428 12:26:03 -- scripts/common.sh@325 -- # (( 2 )) 00:29:10.428 12:26:03 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:10.428 12:26:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:10.428 12:26:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:10.428 12:26:03 -- common/autotest_common.sh@10 -- # set +x 00:29:10.428 ************************************ 00:29:10.428 START TEST spdk_target_abort 00:29:10.428 ************************************ 00:29:10.428 12:26:03 -- common/autotest_common.sh@1111 -- # spdk_target 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:29:10.428 12:26:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:10.428 12:26:03 -- common/autotest_common.sh@10 -- # set +x 00:29:10.428 spdk_targetn1 00:29:10.428 12:26:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:10.428 12:26:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:10.428 12:26:03 -- common/autotest_common.sh@10 -- # set +x 00:29:10.428 [2024-04-26 12:26:03.797570] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.428 12:26:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:10.428 12:26:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:10.428 12:26:03 -- common/autotest_common.sh@10 -- # set +x 00:29:10.428 12:26:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:10.428 12:26:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:10.428 12:26:03 -- common/autotest_common.sh@10 -- # set +x 00:29:10.428 12:26:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:10.428 12:26:03 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:29:10.428 12:26:03 -- common/autotest_common.sh@10 -- # set +x 00:29:10.428 [2024-04-26 12:26:03.829745] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.428 12:26:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:10.428 12:26:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:10.429 12:26:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:10.429 12:26:03 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:13.710 Initializing NVMe Controllers 00:29:13.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:13.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:13.711 Initialization complete. Launching workers. 
00:29:13.711 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10688, failed: 0 00:29:13.711 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1040, failed to submit 9648 00:29:13.711 success 862, unsuccess 178, failed 0 00:29:13.711 12:26:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:13.711 12:26:07 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:16.994 Initializing NVMe Controllers 00:29:16.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:16.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:16.994 Initialization complete. Launching workers. 00:29:16.994 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8861, failed: 0 00:29:16.994 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1171, failed to submit 7690 00:29:16.994 success 378, unsuccess 793, failed 0 00:29:16.994 12:26:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:16.994 12:26:10 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:20.277 Initializing NVMe Controllers 00:29:20.277 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:20.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:20.277 Initialization complete. Launching workers. 00:29:20.277 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31687, failed: 0 00:29:20.277 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2264, failed to submit 29423 00:29:20.277 success 484, unsuccess 1780, failed 0 00:29:20.277 12:26:13 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:20.277 12:26:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.277 12:26:13 -- common/autotest_common.sh@10 -- # set +x 00:29:20.277 12:26:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.277 12:26:13 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:20.277 12:26:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.277 12:26:13 -- common/autotest_common.sh@10 -- # set +x 00:29:20.842 12:26:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.842 12:26:14 -- target/abort_qd_sizes.sh@61 -- # killprocess 80614 00:29:20.842 12:26:14 -- common/autotest_common.sh@936 -- # '[' -z 80614 ']' 00:29:20.842 12:26:14 -- common/autotest_common.sh@940 -- # kill -0 80614 00:29:20.842 12:26:14 -- common/autotest_common.sh@941 -- # uname 00:29:20.842 12:26:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:20.842 12:26:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80614 00:29:20.842 12:26:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:20.842 12:26:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:20.842 killing process with pid 80614 00:29:20.842 12:26:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80614' 00:29:20.842 12:26:14 -- common/autotest_common.sh@955 -- # kill 80614 
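[editor's note] For reference, the spdk_target_abort sweep traced above reduces to three invocations of SPDK's abort example at increasing queue depths. A minimal standalone sketch follows, assuming the same build path and the TCP listener created earlier in this run (10.0.0.2:4420, nqn.2016-06.io.spdk:testnqn); it is not an exact replay of the test harness.

#!/usr/bin/env bash
# Sketch: replay the queue-depth abort sweep against an already-running NVMe-oF TCP subsystem.
# The binary path and the transport ID string are copied from this run and will differ elsewhere.
abort_bin=/home/vagrant/spdk_repo/spdk/build/examples/abort
trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    # 50/50 read-write mix, 4 KiB I/Os; aborts are submitted for outstanding commands at each depth
    "$abort_bin" -q "$qd" -w rw -M 50 -o 4096 -r "$trid"
done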
00:29:20.842 12:26:14 -- common/autotest_common.sh@960 -- # wait 80614 00:29:21.100 00:29:21.100 real 0m10.772s 00:29:21.100 user 0m43.253s 00:29:21.100 sys 0m2.187s 00:29:21.100 12:26:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:21.100 12:26:14 -- common/autotest_common.sh@10 -- # set +x 00:29:21.100 ************************************ 00:29:21.100 END TEST spdk_target_abort 00:29:21.100 ************************************ 00:29:21.100 12:26:14 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:21.100 12:26:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:21.100 12:26:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:21.100 12:26:14 -- common/autotest_common.sh@10 -- # set +x 00:29:21.358 ************************************ 00:29:21.358 START TEST kernel_target_abort 00:29:21.358 ************************************ 00:29:21.358 12:26:14 -- common/autotest_common.sh@1111 -- # kernel_target 00:29:21.358 12:26:14 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:21.358 12:26:14 -- nvmf/common.sh@717 -- # local ip 00:29:21.358 12:26:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:21.358 12:26:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:21.358 12:26:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.358 12:26:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.358 12:26:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:21.358 12:26:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.358 12:26:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:21.358 12:26:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:21.358 12:26:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:21.358 12:26:14 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:21.358 12:26:14 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:21.358 12:26:14 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:29:21.358 12:26:14 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:21.358 12:26:14 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:21.358 12:26:14 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:21.358 12:26:14 -- nvmf/common.sh@628 -- # local block nvme 00:29:21.358 12:26:14 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:21.358 12:26:14 -- nvmf/common.sh@631 -- # modprobe nvmet 00:29:21.358 12:26:14 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:21.358 12:26:14 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:21.616 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:21.616 Waiting for block devices as requested 00:29:21.616 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:21.874 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:21.874 12:26:15 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:21.874 12:26:15 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:21.874 12:26:15 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:29:21.874 12:26:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:29:21.874 12:26:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:21.874 12:26:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:21.874 12:26:15 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:29:21.874 12:26:15 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:21.874 12:26:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:29:21.874 No valid GPT data, bailing 00:29:21.874 12:26:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:21.874 12:26:15 -- scripts/common.sh@391 -- # pt= 00:29:21.874 12:26:15 -- scripts/common.sh@392 -- # return 1 00:29:21.874 12:26:15 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:29:21.874 12:26:15 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:21.874 12:26:15 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:29:21.874 12:26:15 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:29:21.874 12:26:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:29:21.874 12:26:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:29:21.874 12:26:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:21.874 12:26:15 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:29:21.874 12:26:15 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:29:21.874 12:26:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:29:21.874 No valid GPT data, bailing 00:29:21.874 12:26:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:29:21.874 12:26:15 -- scripts/common.sh@391 -- # pt= 00:29:21.874 12:26:15 -- scripts/common.sh@392 -- # return 1 00:29:21.875 12:26:15 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:29:21.875 12:26:15 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:21.875 12:26:15 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:29:21.875 12:26:15 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:29:21.875 12:26:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:29:21.875 12:26:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:29:21.875 12:26:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:21.875 12:26:15 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:29:21.875 12:26:15 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:29:21.875 12:26:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:29:22.133 No valid GPT data, bailing 00:29:22.133 12:26:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value 
/dev/nvme0n3 00:29:22.133 12:26:15 -- scripts/common.sh@391 -- # pt= 00:29:22.133 12:26:15 -- scripts/common.sh@392 -- # return 1 00:29:22.133 12:26:15 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:29:22.133 12:26:15 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:22.133 12:26:15 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:29:22.133 12:26:15 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:29:22.133 12:26:15 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:29:22.133 12:26:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:22.133 12:26:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:22.133 12:26:15 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:29:22.133 12:26:15 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:29:22.133 12:26:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:29:22.133 No valid GPT data, bailing 00:29:22.133 12:26:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:29:22.133 12:26:15 -- scripts/common.sh@391 -- # pt= 00:29:22.133 12:26:15 -- scripts/common.sh@392 -- # return 1 00:29:22.133 12:26:15 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:29:22.133 12:26:15 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:29:22.133 12:26:15 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:22.133 12:26:15 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:22.133 12:26:15 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:22.133 12:26:15 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:22.133 12:26:15 -- nvmf/common.sh@656 -- # echo 1 00:29:22.133 12:26:15 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:29:22.133 12:26:15 -- nvmf/common.sh@658 -- # echo 1 00:29:22.133 12:26:15 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:29:22.133 12:26:15 -- nvmf/common.sh@661 -- # echo tcp 00:29:22.133 12:26:15 -- nvmf/common.sh@662 -- # echo 4420 00:29:22.133 12:26:15 -- nvmf/common.sh@663 -- # echo ipv4 00:29:22.133 12:26:15 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:22.133 12:26:15 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 --hostid=df75fdfd-6375-420c-96e8-b6b24e93a083 -a 10.0.0.1 -t tcp -s 4420 00:29:22.133 00:29:22.133 Discovery Log Number of Records 2, Generation counter 2 00:29:22.133 =====Discovery Log Entry 0====== 00:29:22.133 trtype: tcp 00:29:22.133 adrfam: ipv4 00:29:22.133 subtype: current discovery subsystem 00:29:22.133 treq: not specified, sq flow control disable supported 00:29:22.133 portid: 1 00:29:22.133 trsvcid: 4420 00:29:22.133 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:22.133 traddr: 10.0.0.1 00:29:22.133 eflags: none 00:29:22.133 sectype: none 00:29:22.133 =====Discovery Log Entry 1====== 00:29:22.133 trtype: tcp 00:29:22.133 adrfam: ipv4 00:29:22.133 subtype: nvme subsystem 00:29:22.133 treq: not specified, sq flow control disable supported 00:29:22.133 portid: 1 00:29:22.133 trsvcid: 4420 00:29:22.133 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:22.133 traddr: 10.0.0.1 00:29:22.133 eflags: none 00:29:22.133 sectype: none 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:22.133 
12:26:15 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:22.133 12:26:15 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:25.410 Initializing NVMe Controllers 00:29:25.410 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:25.410 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:25.410 Initialization complete. Launching workers. 00:29:25.410 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32172, failed: 0 00:29:25.410 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32172, failed to submit 0 00:29:25.410 success 0, unsuccess 32172, failed 0 00:29:25.410 12:26:18 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:25.410 12:26:18 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:28.717 Initializing NVMe Controllers 00:29:28.717 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:28.717 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:28.717 Initialization complete. Launching workers. 
00:29:28.717 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 61650, failed: 0 00:29:28.717 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26216, failed to submit 35434 00:29:28.717 success 0, unsuccess 26216, failed 0 00:29:28.717 12:26:21 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:28.717 12:26:21 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:31.997 Initializing NVMe Controllers 00:29:31.997 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:31.997 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:31.997 Initialization complete. Launching workers. 00:29:31.997 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76582, failed: 0 00:29:31.997 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19180, failed to submit 57402 00:29:31.997 success 0, unsuccess 19180, failed 0 00:29:31.997 12:26:25 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:31.997 12:26:25 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:31.997 12:26:25 -- nvmf/common.sh@675 -- # echo 0 00:29:31.997 12:26:25 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:31.997 12:26:25 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:31.997 12:26:25 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:31.997 12:26:25 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:31.997 12:26:25 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:29:31.997 12:26:25 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:29:31.997 12:26:25 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:32.564 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:34.467 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:34.467 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:34.467 00:29:34.467 real 0m13.024s 00:29:34.467 user 0m6.307s 00:29:34.467 sys 0m4.086s 00:29:34.467 12:26:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:34.467 12:26:27 -- common/autotest_common.sh@10 -- # set +x 00:29:34.467 ************************************ 00:29:34.467 END TEST kernel_target_abort 00:29:34.467 ************************************ 00:29:34.467 12:26:27 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:34.467 12:26:27 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:34.467 12:26:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:34.467 12:26:27 -- nvmf/common.sh@117 -- # sync 00:29:34.467 12:26:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:34.467 12:26:27 -- nvmf/common.sh@120 -- # set +e 00:29:34.467 12:26:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:34.467 12:26:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:34.467 rmmod nvme_tcp 00:29:34.467 rmmod nvme_fabrics 00:29:34.467 rmmod nvme_keyring 00:29:34.467 12:26:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:34.467 12:26:27 -- nvmf/common.sh@124 -- # set -e 00:29:34.467 
12:26:27 -- nvmf/common.sh@125 -- # return 0 00:29:34.467 12:26:27 -- nvmf/common.sh@478 -- # '[' -n 80614 ']' 00:29:34.467 12:26:27 -- nvmf/common.sh@479 -- # killprocess 80614 00:29:34.467 12:26:27 -- common/autotest_common.sh@936 -- # '[' -z 80614 ']' 00:29:34.467 12:26:27 -- common/autotest_common.sh@940 -- # kill -0 80614 00:29:34.467 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (80614) - No such process 00:29:34.467 Process with pid 80614 is not found 00:29:34.467 12:26:27 -- common/autotest_common.sh@963 -- # echo 'Process with pid 80614 is not found' 00:29:34.467 12:26:27 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:29:34.467 12:26:27 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:34.725 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:34.725 Waiting for block devices as requested 00:29:34.725 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:34.984 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:34.984 12:26:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:34.984 12:26:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:34.984 12:26:28 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:34.984 12:26:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:34.984 12:26:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.984 12:26:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:34.984 12:26:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.984 12:26:28 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:34.984 ************************************ 00:29:34.984 END TEST nvmf_abort_qd_sizes 00:29:34.984 ************************************ 00:29:34.984 00:29:34.984 real 0m27.090s 00:29:34.984 user 0m50.766s 00:29:34.984 sys 0m7.674s 00:29:34.984 12:26:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:34.984 12:26:28 -- common/autotest_common.sh@10 -- # set +x 00:29:34.984 12:26:28 -- spdk/autotest.sh@293 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:29:34.984 12:26:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:34.984 12:26:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:34.984 12:26:28 -- common/autotest_common.sh@10 -- # set +x 00:29:35.243 ************************************ 00:29:35.243 START TEST keyring_file 00:29:35.243 ************************************ 00:29:35.243 12:26:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:29:35.243 * Looking for test storage... 
00:29:35.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:29:35.243 12:26:28 -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:29:35.243 12:26:28 -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:35.243 12:26:28 -- nvmf/common.sh@7 -- # uname -s 00:29:35.243 12:26:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.243 12:26:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.243 12:26:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.243 12:26:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.243 12:26:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.243 12:26:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.243 12:26:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.243 12:26:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.243 12:26:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.243 12:26:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.243 12:26:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df75fdfd-6375-420c-96e8-b6b24e93a083 00:29:35.243 12:26:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=df75fdfd-6375-420c-96e8-b6b24e93a083 00:29:35.243 12:26:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.243 12:26:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.243 12:26:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:35.243 12:26:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.243 12:26:28 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:35.243 12:26:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.243 12:26:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.243 12:26:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.243 12:26:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.243 12:26:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.243 12:26:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.243 12:26:28 -- paths/export.sh@5 -- # export PATH 00:29:35.243 12:26:28 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.243 12:26:28 -- nvmf/common.sh@47 -- # : 0 00:29:35.243 12:26:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:35.243 12:26:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:35.243 12:26:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:35.243 12:26:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.243 12:26:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.243 12:26:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:35.243 12:26:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:35.243 12:26:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:35.243 12:26:28 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:35.243 12:26:28 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:35.243 12:26:28 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:35.243 12:26:28 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:35.243 12:26:28 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:35.243 12:26:28 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:35.243 12:26:28 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:35.243 12:26:28 -- keyring/common.sh@15 -- # local name key digest path 00:29:35.244 12:26:28 -- keyring/common.sh@17 -- # name=key0 00:29:35.244 12:26:28 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:35.244 12:26:28 -- keyring/common.sh@17 -- # digest=0 00:29:35.244 12:26:28 -- keyring/common.sh@18 -- # mktemp 00:29:35.244 12:26:28 -- keyring/common.sh@18 -- # path=/tmp/tmp.EtLQlKw4zP 00:29:35.244 12:26:28 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:35.244 12:26:28 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:35.244 12:26:28 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:35.244 12:26:28 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:35.244 12:26:28 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:35.244 12:26:28 -- nvmf/common.sh@693 -- # digest=0 00:29:35.244 12:26:28 -- nvmf/common.sh@694 -- # python - 00:29:35.244 12:26:28 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.EtLQlKw4zP 00:29:35.244 12:26:28 -- keyring/common.sh@23 -- # echo /tmp/tmp.EtLQlKw4zP 00:29:35.244 12:26:28 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.EtLQlKw4zP 00:29:35.244 12:26:28 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:35.244 12:26:28 -- keyring/common.sh@15 -- # local name key digest path 00:29:35.244 12:26:28 -- keyring/common.sh@17 -- # name=key1 00:29:35.244 12:26:28 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:35.244 12:26:28 -- keyring/common.sh@17 -- # digest=0 00:29:35.244 12:26:28 -- keyring/common.sh@18 -- # mktemp 00:29:35.244 12:26:28 -- keyring/common.sh@18 -- # path=/tmp/tmp.PS37I4nELP 00:29:35.244 12:26:28 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:35.244 12:26:28 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:29:35.244 12:26:28 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:35.244 12:26:28 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:35.244 12:26:28 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:29:35.244 12:26:28 -- nvmf/common.sh@693 -- # digest=0 00:29:35.244 12:26:28 -- nvmf/common.sh@694 -- # python - 00:29:35.244 12:26:28 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PS37I4nELP 00:29:35.244 12:26:28 -- keyring/common.sh@23 -- # echo /tmp/tmp.PS37I4nELP 00:29:35.244 12:26:28 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.PS37I4nELP 00:29:35.244 12:26:28 -- keyring/file.sh@30 -- # tgtpid=81504 00:29:35.244 12:26:28 -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:35.244 12:26:28 -- keyring/file.sh@32 -- # waitforlisten 81504 00:29:35.244 12:26:28 -- common/autotest_common.sh@817 -- # '[' -z 81504 ']' 00:29:35.244 12:26:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.244 12:26:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:35.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.244 12:26:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.244 12:26:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:35.244 12:26:28 -- common/autotest_common.sh@10 -- # set +x 00:29:35.502 [2024-04-26 12:26:28.744997] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:29:35.502 [2024-04-26 12:26:28.745095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81504 ] 00:29:35.502 [2024-04-26 12:26:28.888521] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.761 [2024-04-26 12:26:29.017663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.328 12:26:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:36.328 12:26:29 -- common/autotest_common.sh@850 -- # return 0 00:29:36.328 12:26:29 -- keyring/file.sh@33 -- # rpc_cmd 00:29:36.328 12:26:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.328 12:26:29 -- common/autotest_common.sh@10 -- # set +x 00:29:36.328 [2024-04-26 12:26:29.793063] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.587 null0 00:29:36.587 [2024-04-26 12:26:29.824972] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:36.587 [2024-04-26 12:26:29.825257] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:36.587 [2024-04-26 12:26:29.832990] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:36.587 12:26:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.587 12:26:29 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:36.587 12:26:29 -- common/autotest_common.sh@638 -- # local es=0 00:29:36.587 12:26:29 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:36.587 12:26:29 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:36.587 12:26:29 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:36.587 12:26:29 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:36.587 12:26:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:36.587 12:26:29 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:36.587 12:26:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.587 12:26:29 -- common/autotest_common.sh@10 -- # set +x 00:29:36.587 [2024-04-26 12:26:29.844979] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:29:36.587 { 00:29:36.587 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:36.587 "secure_channel": false, 00:29:36.587 "listen_address": { 00:29:36.587 "trtype": "tcp", 00:29:36.587 "traddr": "127.0.0.1", 00:29:36.587 "trsvcid": "4420" 00:29:36.587 }, 00:29:36.587 "method": "nvmf_subsystem_add_listener", 00:29:36.587 "req_id": 1 00:29:36.587 } 00:29:36.587 Got JSON-RPC error response 00:29:36.587 response: 00:29:36.587 { 00:29:36.587 "code": -32602, 00:29:36.587 "message": "Invalid parameters" 00:29:36.587 } 00:29:36.587 12:26:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:36.587 12:26:29 -- common/autotest_common.sh@641 -- # es=1 00:29:36.587 12:26:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:36.587 12:26:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:36.587 12:26:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:36.587 12:26:29 -- keyring/file.sh@46 -- # bperfpid=81521 00:29:36.587 12:26:29 -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:36.587 12:26:29 -- keyring/file.sh@48 -- # waitforlisten 81521 /var/tmp/bperf.sock 00:29:36.587 12:26:29 -- common/autotest_common.sh@817 -- # '[' -z 81521 ']' 00:29:36.587 12:26:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:36.587 12:26:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:36.587 12:26:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:36.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:36.587 12:26:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:36.587 12:26:29 -- common/autotest_common.sh@10 -- # set +x 00:29:36.587 [2024-04-26 12:26:29.912146] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 
00:29:36.587 [2024-04-26 12:26:29.912613] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81521 ] 00:29:36.587 [2024-04-26 12:26:30.050320] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.846 [2024-04-26 12:26:30.184466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.780 12:26:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:37.780 12:26:30 -- common/autotest_common.sh@850 -- # return 0 00:29:37.780 12:26:30 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EtLQlKw4zP 00:29:37.780 12:26:30 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EtLQlKw4zP 00:29:37.780 12:26:31 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.PS37I4nELP 00:29:37.780 12:26:31 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.PS37I4nELP 00:29:38.038 12:26:31 -- keyring/file.sh@51 -- # jq -r .path 00:29:38.038 12:26:31 -- keyring/file.sh@51 -- # get_key key0 00:29:38.038 12:26:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:38.038 12:26:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:38.038 12:26:31 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.605 12:26:31 -- keyring/file.sh@51 -- # [[ /tmp/tmp.EtLQlKw4zP == \/\t\m\p\/\t\m\p\.\E\t\L\Q\l\K\w\4\z\P ]] 00:29:38.605 12:26:31 -- keyring/file.sh@52 -- # get_key key1 00:29:38.605 12:26:31 -- keyring/file.sh@52 -- # jq -r .path 00:29:38.605 12:26:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:38.605 12:26:31 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.605 12:26:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:38.605 12:26:32 -- keyring/file.sh@52 -- # [[ /tmp/tmp.PS37I4nELP == \/\t\m\p\/\t\m\p\.\P\S\3\7\I\4\n\E\L\P ]] 00:29:38.605 12:26:32 -- keyring/file.sh@53 -- # get_refcnt key0 00:29:38.605 12:26:32 -- keyring/common.sh@12 -- # get_key key0 00:29:38.605 12:26:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:38.605 12:26:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:38.605 12:26:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:38.605 12:26:32 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.871 12:26:32 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:38.871 12:26:32 -- keyring/file.sh@54 -- # get_refcnt key1 00:29:38.871 12:26:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:38.871 12:26:32 -- keyring/common.sh@12 -- # get_key key1 00:29:38.871 12:26:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:38.871 12:26:32 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.871 12:26:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:39.131 12:26:32 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:39.131 12:26:32 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
key0 00:29:39.131 12:26:32 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:39.389 [2024-04-26 12:26:32.758715] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:39.389 nvme0n1 00:29:39.389 12:26:32 -- keyring/file.sh@59 -- # get_refcnt key0 00:29:39.389 12:26:32 -- keyring/common.sh@12 -- # get_key key0 00:29:39.389 12:26:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:39.389 12:26:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:39.389 12:26:32 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:39.389 12:26:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:39.958 12:26:33 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:39.958 12:26:33 -- keyring/file.sh@60 -- # get_refcnt key1 00:29:39.958 12:26:33 -- keyring/common.sh@12 -- # get_key key1 00:29:39.958 12:26:33 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:39.958 12:26:33 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:39.958 12:26:33 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:39.958 12:26:33 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:40.216 12:26:33 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:40.216 12:26:33 -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:40.216 Running I/O for 1 seconds... 00:29:41.148 00:29:41.148 Latency(us) 00:29:41.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.148 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:41.148 nvme0n1 : 1.01 11933.81 46.62 0.00 0.00 10687.83 5779.08 19422.49 00:29:41.148 =================================================================================================================== 00:29:41.148 Total : 11933.81 46.62 0.00 0.00 10687.83 5779.08 19422.49 00:29:41.148 0 00:29:41.148 12:26:34 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:41.148 12:26:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:41.406 12:26:34 -- keyring/file.sh@65 -- # get_refcnt key0 00:29:41.406 12:26:34 -- keyring/common.sh@12 -- # get_key key0 00:29:41.406 12:26:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:41.406 12:26:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:41.406 12:26:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:41.406 12:26:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:41.664 12:26:35 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:41.664 12:26:35 -- keyring/file.sh@66 -- # get_refcnt key1 00:29:41.664 12:26:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:41.664 12:26:35 -- keyring/common.sh@12 -- # get_key key1 00:29:41.664 12:26:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:41.664 12:26:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:41.664 12:26:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:41.922 
12:26:35 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:41.922 12:26:35 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:41.922 12:26:35 -- common/autotest_common.sh@638 -- # local es=0 00:29:41.922 12:26:35 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:41.922 12:26:35 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:41.922 12:26:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:41.922 12:26:35 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:41.922 12:26:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:41.922 12:26:35 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:41.922 12:26:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:42.181 [2024-04-26 12:26:35.619536] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acc150 (107)[2024-04-26 12:26:35.619536] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:42.181 : Transport endpoint is not connected 00:29:42.181 [2024-04-26 12:26:35.620528] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acc150 (9): Bad file descriptor 00:29:42.181 request: 00:29:42.181 { 00:29:42.181 "name": "nvme0", 00:29:42.181 "trtype": "tcp", 00:29:42.181 "traddr": "127.0.0.1", 00:29:42.181 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:42.181 "adrfam": "ipv4", 00:29:42.181 "trsvcid": "4420", 00:29:42.181 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:42.181 "psk": "key1", 00:29:42.181 "method": "bdev_nvme_attach_controller", 00:29:42.181 "req_id": 1 00:29:42.181 } 00:29:42.181 Got JSON-RPC error response 00:29:42.181 response: 00:29:42.181 { 00:29:42.181 "code": -32602, 00:29:42.181 "message": "Invalid parameters" 00:29:42.181 } 00:29:42.181 [2024-04-26 12:26:35.621524] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:42.181 [2024-04-26 12:26:35.621549] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:42.181 [2024-04-26 12:26:35.621560] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
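[editor's note] The failed attach above is the negative case: key1 is not the PSK the target side expects, so the TLS connection is torn down. Condensed from the bperf RPC calls traced in this test, the happy-path flow is roughly the sketch below; the socket path, key names and key file come from this run, and any 0600 PSK file registered under key0 would serve.

# Sketch: register a PSK file and attach a controller over it via the bperf RPC socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key0path=/tmp/tmp.EtLQlKw4zP   # key file written earlier in this run (mode 0600)
$rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
# Inspect registered keys; an active controller reference shows up in .refcnt
$rpc -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0")'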
00:29:42.181 12:26:35 -- common/autotest_common.sh@641 -- # es=1 00:29:42.181 12:26:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:42.181 12:26:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:42.181 12:26:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:42.181 12:26:35 -- keyring/file.sh@71 -- # get_refcnt key0 00:29:42.181 12:26:35 -- keyring/common.sh@12 -- # get_key key0 00:29:42.181 12:26:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:42.181 12:26:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:42.181 12:26:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:42.181 12:26:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:42.439 12:26:35 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:42.697 12:26:35 -- keyring/file.sh@72 -- # get_refcnt key1 00:29:42.697 12:26:35 -- keyring/common.sh@12 -- # get_key key1 00:29:42.697 12:26:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:42.697 12:26:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:42.698 12:26:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:42.698 12:26:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:42.698 12:26:36 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:42.698 12:26:36 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:42.698 12:26:36 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:43.264 12:26:36 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:43.264 12:26:36 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:43.264 12:26:36 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:43.264 12:26:36 -- keyring/file.sh@77 -- # jq length 00:29:43.264 12:26:36 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:43.522 12:26:36 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:43.522 12:26:36 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.EtLQlKw4zP 00:29:43.522 12:26:36 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.EtLQlKw4zP 00:29:43.522 12:26:36 -- common/autotest_common.sh@638 -- # local es=0 00:29:43.522 12:26:36 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.EtLQlKw4zP 00:29:43.522 12:26:36 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:43.522 12:26:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:43.522 12:26:36 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:43.522 12:26:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:43.522 12:26:36 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EtLQlKw4zP 00:29:43.522 12:26:36 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EtLQlKw4zP 00:29:43.781 [2024-04-26 12:26:37.213915] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.EtLQlKw4zP': 0100660 00:29:43.781 [2024-04-26 12:26:37.213976] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:43.781 request: 00:29:43.781 { 00:29:43.781 "name": "key0", 
00:29:43.781 "path": "/tmp/tmp.EtLQlKw4zP", 00:29:43.781 "method": "keyring_file_add_key", 00:29:43.781 "req_id": 1 00:29:43.781 } 00:29:43.781 Got JSON-RPC error response 00:29:43.781 response: 00:29:43.781 { 00:29:43.781 "code": -1, 00:29:43.781 "message": "Operation not permitted" 00:29:43.781 } 00:29:43.781 12:26:37 -- common/autotest_common.sh@641 -- # es=1 00:29:43.781 12:26:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:43.781 12:26:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:43.781 12:26:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:43.781 12:26:37 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.EtLQlKw4zP 00:29:43.781 12:26:37 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EtLQlKw4zP 00:29:43.781 12:26:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EtLQlKw4zP 00:29:44.039 12:26:37 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.EtLQlKw4zP 00:29:44.039 12:26:37 -- keyring/file.sh@88 -- # get_refcnt key0 00:29:44.039 12:26:37 -- keyring/common.sh@12 -- # get_key key0 00:29:44.039 12:26:37 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:44.039 12:26:37 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:44.039 12:26:37 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:44.039 12:26:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:44.298 12:26:37 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:44.298 12:26:37 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:44.298 12:26:37 -- common/autotest_common.sh@638 -- # local es=0 00:29:44.298 12:26:37 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:44.298 12:26:37 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:44.298 12:26:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:44.298 12:26:37 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:44.298 12:26:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:44.298 12:26:37 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:44.298 12:26:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:44.556 [2024-04-26 12:26:37.958065] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.EtLQlKw4zP': No such file or directory 00:29:44.556 [2024-04-26 12:26:37.958119] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:44.556 [2024-04-26 12:26:37.958146] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:44.556 [2024-04-26 12:26:37.958155] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:44.556 [2024-04-26 12:26:37.958163] bdev_nvme.c:6208:bdev_nvme_create: *ERROR*: No controller was found with 
provided trid (traddr: 127.0.0.1) 00:29:44.556 request: 00:29:44.556 { 00:29:44.556 "name": "nvme0", 00:29:44.556 "trtype": "tcp", 00:29:44.556 "traddr": "127.0.0.1", 00:29:44.556 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:44.556 "adrfam": "ipv4", 00:29:44.556 "trsvcid": "4420", 00:29:44.556 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:44.556 "psk": "key0", 00:29:44.556 "method": "bdev_nvme_attach_controller", 00:29:44.556 "req_id": 1 00:29:44.556 } 00:29:44.556 Got JSON-RPC error response 00:29:44.556 response: 00:29:44.556 { 00:29:44.556 "code": -19, 00:29:44.556 "message": "No such device" 00:29:44.557 } 00:29:44.557 12:26:37 -- common/autotest_common.sh@641 -- # es=1 00:29:44.557 12:26:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:44.557 12:26:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:44.557 12:26:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:44.557 12:26:37 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:44.557 12:26:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:44.814 12:26:38 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:44.814 12:26:38 -- keyring/common.sh@15 -- # local name key digest path 00:29:44.814 12:26:38 -- keyring/common.sh@17 -- # name=key0 00:29:44.814 12:26:38 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:44.814 12:26:38 -- keyring/common.sh@17 -- # digest=0 00:29:44.814 12:26:38 -- keyring/common.sh@18 -- # mktemp 00:29:44.814 12:26:38 -- keyring/common.sh@18 -- # path=/tmp/tmp.iRBpydTMP6 00:29:44.814 12:26:38 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:44.814 12:26:38 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:44.814 12:26:38 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:44.814 12:26:38 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:44.814 12:26:38 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:44.814 12:26:38 -- nvmf/common.sh@693 -- # digest=0 00:29:44.814 12:26:38 -- nvmf/common.sh@694 -- # python - 00:29:45.072 12:26:38 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.iRBpydTMP6 00:29:45.073 12:26:38 -- keyring/common.sh@23 -- # echo /tmp/tmp.iRBpydTMP6 00:29:45.073 12:26:38 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.iRBpydTMP6 00:29:45.073 12:26:38 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iRBpydTMP6 00:29:45.073 12:26:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iRBpydTMP6 00:29:45.331 12:26:38 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:45.331 12:26:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:45.589 nvme0n1 00:29:45.589 12:26:38 -- keyring/file.sh@99 -- # get_refcnt key0 00:29:45.589 12:26:38 -- keyring/common.sh@12 -- # get_key key0 00:29:45.589 12:26:38 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:45.589 12:26:38 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:45.589 12:26:38 -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:29:45.589 12:26:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:45.847 12:26:39 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:45.847 12:26:39 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:45.847 12:26:39 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:46.106 12:26:39 -- keyring/file.sh@101 -- # jq -r .removed 00:29:46.106 12:26:39 -- keyring/file.sh@101 -- # get_key key0 00:29:46.106 12:26:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:46.106 12:26:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:46.106 12:26:39 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:46.365 12:26:39 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:46.365 12:26:39 -- keyring/file.sh@102 -- # get_refcnt key0 00:29:46.365 12:26:39 -- keyring/common.sh@12 -- # get_key key0 00:29:46.365 12:26:39 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:46.365 12:26:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:46.365 12:26:39 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:46.365 12:26:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:46.623 12:26:39 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:46.623 12:26:39 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:46.623 12:26:39 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:46.881 12:26:40 -- keyring/file.sh@104 -- # jq length 00:29:46.881 12:26:40 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:46.881 12:26:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:47.139 12:26:40 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:47.139 12:26:40 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iRBpydTMP6 00:29:47.139 12:26:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iRBpydTMP6 00:29:47.397 12:26:40 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.PS37I4nELP 00:29:47.397 12:26:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.PS37I4nELP 00:29:47.718 12:26:41 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:47.718 12:26:41 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:47.976 nvme0n1 00:29:47.976 12:26:41 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:47.976 12:26:41 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:48.237 12:26:41 -- keyring/file.sh@112 -- # config='{ 00:29:48.237 "subsystems": [ 00:29:48.237 { 00:29:48.237 "subsystem": "keyring", 00:29:48.237 "config": [ 00:29:48.237 { 00:29:48.237 "method": "keyring_file_add_key", 00:29:48.237 
"params": { 00:29:48.237 "name": "key0", 00:29:48.237 "path": "/tmp/tmp.iRBpydTMP6" 00:29:48.237 } 00:29:48.237 }, 00:29:48.237 { 00:29:48.237 "method": "keyring_file_add_key", 00:29:48.237 "params": { 00:29:48.237 "name": "key1", 00:29:48.237 "path": "/tmp/tmp.PS37I4nELP" 00:29:48.237 } 00:29:48.237 } 00:29:48.237 ] 00:29:48.237 }, 00:29:48.237 { 00:29:48.237 "subsystem": "iobuf", 00:29:48.237 "config": [ 00:29:48.237 { 00:29:48.237 "method": "iobuf_set_options", 00:29:48.237 "params": { 00:29:48.237 "small_pool_count": 8192, 00:29:48.237 "large_pool_count": 1024, 00:29:48.237 "small_bufsize": 8192, 00:29:48.237 "large_bufsize": 135168 00:29:48.237 } 00:29:48.237 } 00:29:48.237 ] 00:29:48.237 }, 00:29:48.237 { 00:29:48.237 "subsystem": "sock", 00:29:48.237 "config": [ 00:29:48.237 { 00:29:48.237 "method": "sock_impl_set_options", 00:29:48.237 "params": { 00:29:48.237 "impl_name": "uring", 00:29:48.237 "recv_buf_size": 2097152, 00:29:48.237 "send_buf_size": 2097152, 00:29:48.237 "enable_recv_pipe": true, 00:29:48.237 "enable_quickack": false, 00:29:48.237 "enable_placement_id": 0, 00:29:48.237 "enable_zerocopy_send_server": false, 00:29:48.237 "enable_zerocopy_send_client": false, 00:29:48.237 "zerocopy_threshold": 0, 00:29:48.237 "tls_version": 0, 00:29:48.237 "enable_ktls": false 00:29:48.237 } 00:29:48.237 }, 00:29:48.237 { 00:29:48.237 "method": "sock_impl_set_options", 00:29:48.237 "params": { 00:29:48.237 "impl_name": "posix", 00:29:48.237 "recv_buf_size": 2097152, 00:29:48.237 "send_buf_size": 2097152, 00:29:48.237 "enable_recv_pipe": true, 00:29:48.237 "enable_quickack": false, 00:29:48.237 "enable_placement_id": 0, 00:29:48.237 "enable_zerocopy_send_server": true, 00:29:48.237 "enable_zerocopy_send_client": false, 00:29:48.237 "zerocopy_threshold": 0, 00:29:48.237 "tls_version": 0, 00:29:48.237 "enable_ktls": false 00:29:48.237 } 00:29:48.237 }, 00:29:48.237 { 00:29:48.237 "method": "sock_impl_set_options", 00:29:48.237 "params": { 00:29:48.237 "impl_name": "ssl", 00:29:48.237 "recv_buf_size": 4096, 00:29:48.237 "send_buf_size": 4096, 00:29:48.237 "enable_recv_pipe": true, 00:29:48.237 "enable_quickack": false, 00:29:48.237 "enable_placement_id": 0, 00:29:48.237 "enable_zerocopy_send_server": true, 00:29:48.237 "enable_zerocopy_send_client": false, 00:29:48.237 "zerocopy_threshold": 0, 00:29:48.237 "tls_version": 0, 00:29:48.237 "enable_ktls": false 00:29:48.237 } 00:29:48.237 } 00:29:48.237 ] 00:29:48.237 }, 00:29:48.237 { 00:29:48.237 "subsystem": "vmd", 00:29:48.237 "config": [] 00:29:48.237 }, 00:29:48.237 { 00:29:48.237 "subsystem": "accel", 00:29:48.237 "config": [ 00:29:48.237 { 00:29:48.237 "method": "accel_set_options", 00:29:48.237 "params": { 00:29:48.237 "small_cache_size": 128, 00:29:48.237 "large_cache_size": 16, 00:29:48.237 "task_count": 2048, 00:29:48.237 "sequence_count": 2048, 00:29:48.237 "buf_count": 2048 00:29:48.237 } 00:29:48.237 } 00:29:48.237 ] 00:29:48.237 }, 00:29:48.237 { 00:29:48.237 "subsystem": "bdev", 00:29:48.238 "config": [ 00:29:48.238 { 00:29:48.238 "method": "bdev_set_options", 00:29:48.238 "params": { 00:29:48.238 "bdev_io_pool_size": 65535, 00:29:48.238 "bdev_io_cache_size": 256, 00:29:48.238 "bdev_auto_examine": true, 00:29:48.238 "iobuf_small_cache_size": 128, 00:29:48.238 "iobuf_large_cache_size": 16 00:29:48.238 } 00:29:48.238 }, 00:29:48.238 { 00:29:48.238 "method": "bdev_raid_set_options", 00:29:48.238 "params": { 00:29:48.238 "process_window_size_kb": 1024 00:29:48.238 } 00:29:48.238 }, 00:29:48.238 { 00:29:48.238 "method": 
"bdev_iscsi_set_options", 00:29:48.238 "params": { 00:29:48.238 "timeout_sec": 30 00:29:48.238 } 00:29:48.238 }, 00:29:48.238 { 00:29:48.238 "method": "bdev_nvme_set_options", 00:29:48.238 "params": { 00:29:48.238 "action_on_timeout": "none", 00:29:48.238 "timeout_us": 0, 00:29:48.238 "timeout_admin_us": 0, 00:29:48.238 "keep_alive_timeout_ms": 10000, 00:29:48.238 "arbitration_burst": 0, 00:29:48.238 "low_priority_weight": 0, 00:29:48.238 "medium_priority_weight": 0, 00:29:48.238 "high_priority_weight": 0, 00:29:48.238 "nvme_adminq_poll_period_us": 10000, 00:29:48.238 "nvme_ioq_poll_period_us": 0, 00:29:48.238 "io_queue_requests": 512, 00:29:48.238 "delay_cmd_submit": true, 00:29:48.238 "transport_retry_count": 4, 00:29:48.238 "bdev_retry_count": 3, 00:29:48.238 "transport_ack_timeout": 0, 00:29:48.238 "ctrlr_loss_timeout_sec": 0, 00:29:48.238 "reconnect_delay_sec": 0, 00:29:48.238 "fast_io_fail_timeout_sec": 0, 00:29:48.238 "disable_auto_failback": false, 00:29:48.238 "generate_uuids": false, 00:29:48.238 "transport_tos": 0, 00:29:48.238 "nvme_error_stat": false, 00:29:48.238 "rdma_srq_size": 0, 00:29:48.238 "io_path_stat": false, 00:29:48.238 "allow_accel_sequence": false, 00:29:48.238 "rdma_max_cq_size": 0, 00:29:48.238 "rdma_cm_event_timeout_ms": 0, 00:29:48.238 "dhchap_digests": [ 00:29:48.238 "sha256", 00:29:48.238 "sha384", 00:29:48.238 "sha512" 00:29:48.238 ], 00:29:48.238 "dhchap_dhgroups": [ 00:29:48.238 "null", 00:29:48.238 "ffdhe2048", 00:29:48.238 "ffdhe3072", 00:29:48.238 "ffdhe4096", 00:29:48.238 "ffdhe6144", 00:29:48.238 "ffdhe8192" 00:29:48.238 ] 00:29:48.238 } 00:29:48.238 }, 00:29:48.238 { 00:29:48.238 "method": "bdev_nvme_attach_controller", 00:29:48.238 "params": { 00:29:48.238 "name": "nvme0", 00:29:48.238 "trtype": "TCP", 00:29:48.238 "adrfam": "IPv4", 00:29:48.238 "traddr": "127.0.0.1", 00:29:48.238 "trsvcid": "4420", 00:29:48.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:48.238 "prchk_reftag": false, 00:29:48.238 "prchk_guard": false, 00:29:48.238 "ctrlr_loss_timeout_sec": 0, 00:29:48.238 "reconnect_delay_sec": 0, 00:29:48.238 "fast_io_fail_timeout_sec": 0, 00:29:48.238 "psk": "key0", 00:29:48.238 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:48.238 "hdgst": false, 00:29:48.238 "ddgst": false 00:29:48.238 } 00:29:48.238 }, 00:29:48.238 { 00:29:48.238 "method": "bdev_nvme_set_hotplug", 00:29:48.238 "params": { 00:29:48.238 "period_us": 100000, 00:29:48.238 "enable": false 00:29:48.238 } 00:29:48.238 }, 00:29:48.238 { 00:29:48.238 "method": "bdev_wait_for_examine" 00:29:48.238 } 00:29:48.238 ] 00:29:48.238 }, 00:29:48.238 { 00:29:48.238 "subsystem": "nbd", 00:29:48.238 "config": [] 00:29:48.238 } 00:29:48.238 ] 00:29:48.238 }' 00:29:48.238 12:26:41 -- keyring/file.sh@114 -- # killprocess 81521 00:29:48.238 12:26:41 -- common/autotest_common.sh@936 -- # '[' -z 81521 ']' 00:29:48.238 12:26:41 -- common/autotest_common.sh@940 -- # kill -0 81521 00:29:48.238 12:26:41 -- common/autotest_common.sh@941 -- # uname 00:29:48.238 12:26:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:48.238 12:26:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81521 00:29:48.238 killing process with pid 81521 00:29:48.238 Received shutdown signal, test time was about 1.000000 seconds 00:29:48.238 00:29:48.238 Latency(us) 00:29:48.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.238 =================================================================================================================== 00:29:48.238 Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:48.238 12:26:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:48.238 12:26:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:48.238 12:26:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81521' 00:29:48.238 12:26:41 -- common/autotest_common.sh@955 -- # kill 81521 00:29:48.238 12:26:41 -- common/autotest_common.sh@960 -- # wait 81521 00:29:48.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:48.497 12:26:41 -- keyring/file.sh@117 -- # bperfpid=81776 00:29:48.497 12:26:41 -- keyring/file.sh@119 -- # waitforlisten 81776 /var/tmp/bperf.sock 00:29:48.497 12:26:41 -- common/autotest_common.sh@817 -- # '[' -z 81776 ']' 00:29:48.497 12:26:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:48.497 12:26:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:48.497 12:26:41 -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:48.497 12:26:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:48.497 12:26:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:48.497 12:26:41 -- keyring/file.sh@115 -- # echo '{ 00:29:48.497 "subsystems": [ 00:29:48.497 { 00:29:48.497 "subsystem": "keyring", 00:29:48.497 "config": [ 00:29:48.497 { 00:29:48.497 "method": "keyring_file_add_key", 00:29:48.497 "params": { 00:29:48.497 "name": "key0", 00:29:48.497 "path": "/tmp/tmp.iRBpydTMP6" 00:29:48.497 } 00:29:48.497 }, 00:29:48.497 { 00:29:48.497 "method": "keyring_file_add_key", 00:29:48.497 "params": { 00:29:48.497 "name": "key1", 00:29:48.497 "path": "/tmp/tmp.PS37I4nELP" 00:29:48.497 } 00:29:48.497 } 00:29:48.497 ] 00:29:48.497 }, 00:29:48.497 { 00:29:48.497 "subsystem": "iobuf", 00:29:48.497 "config": [ 00:29:48.497 { 00:29:48.497 "method": "iobuf_set_options", 00:29:48.497 "params": { 00:29:48.497 "small_pool_count": 8192, 00:29:48.497 "large_pool_count": 1024, 00:29:48.497 "small_bufsize": 8192, 00:29:48.497 "large_bufsize": 135168 00:29:48.497 } 00:29:48.497 } 00:29:48.497 ] 00:29:48.497 }, 00:29:48.497 { 00:29:48.497 "subsystem": "sock", 00:29:48.497 "config": [ 00:29:48.497 { 00:29:48.497 "method": "sock_impl_set_options", 00:29:48.497 "params": { 00:29:48.497 "impl_name": "uring", 00:29:48.497 "recv_buf_size": 2097152, 00:29:48.497 "send_buf_size": 2097152, 00:29:48.497 "enable_recv_pipe": true, 00:29:48.497 "enable_quickack": false, 00:29:48.497 "enable_placement_id": 0, 00:29:48.497 "enable_zerocopy_send_server": false, 00:29:48.497 "enable_zerocopy_send_client": false, 00:29:48.497 "zerocopy_threshold": 0, 00:29:48.497 "tls_version": 0, 00:29:48.497 "enable_ktls": false 00:29:48.497 } 00:29:48.497 }, 00:29:48.497 { 00:29:48.497 "method": "sock_impl_set_options", 00:29:48.497 "params": { 00:29:48.497 "impl_name": "posix", 00:29:48.497 "recv_buf_size": 2097152, 00:29:48.497 "send_buf_size": 2097152, 00:29:48.497 "enable_recv_pipe": true, 00:29:48.497 "enable_quickack": false, 00:29:48.497 "enable_placement_id": 0, 00:29:48.497 "enable_zerocopy_send_server": true, 00:29:48.497 "enable_zerocopy_send_client": false, 00:29:48.497 "zerocopy_threshold": 0, 00:29:48.497 "tls_version": 0, 00:29:48.497 "enable_ktls": false 00:29:48.497 } 00:29:48.497 }, 00:29:48.497 { 00:29:48.497 "method": "sock_impl_set_options", 
00:29:48.497 "params": { 00:29:48.497 "impl_name": "ssl", 00:29:48.497 "recv_buf_size": 4096, 00:29:48.497 "send_buf_size": 4096, 00:29:48.497 "enable_recv_pipe": true, 00:29:48.497 "enable_quickack": false, 00:29:48.497 "enable_placement_id": 0, 00:29:48.497 "enable_zerocopy_send_server": true, 00:29:48.497 "enable_zerocopy_send_client": false, 00:29:48.497 "zerocopy_threshold": 0, 00:29:48.497 "tls_version": 0, 00:29:48.497 "enable_ktls": false 00:29:48.497 } 00:29:48.497 } 00:29:48.497 ] 00:29:48.497 }, 00:29:48.497 { 00:29:48.497 "subsystem": "vmd", 00:29:48.497 "config": [] 00:29:48.497 }, 00:29:48.497 { 00:29:48.497 "subsystem": "accel", 00:29:48.497 "config": [ 00:29:48.497 { 00:29:48.497 "method": "accel_set_options", 00:29:48.497 "params": { 00:29:48.497 "small_cache_size": 128, 00:29:48.497 "large_cache_size": 16, 00:29:48.497 "task_count": 2048, 00:29:48.497 "sequence_count": 2048, 00:29:48.497 "buf_count": 2048 00:29:48.497 } 00:29:48.497 } 00:29:48.497 ] 00:29:48.497 }, 00:29:48.497 { 00:29:48.497 "subsystem": "bdev", 00:29:48.497 "config": [ 00:29:48.497 { 00:29:48.497 "method": "bdev_set_options", 00:29:48.497 "params": { 00:29:48.497 "bdev_io_pool_size": 65535, 00:29:48.497 "bdev_io_cache_size": 256, 00:29:48.497 "bdev_auto_examine": true, 00:29:48.497 "iobuf_small_cache_size": 128, 00:29:48.497 "iobuf_large_cache_size": 16 00:29:48.497 } 00:29:48.497 }, 00:29:48.497 { 00:29:48.497 "method": "bdev_raid_set_options", 00:29:48.498 "params": { 00:29:48.498 "process_window_size_kb": 1024 00:29:48.498 } 00:29:48.498 }, 00:29:48.498 { 00:29:48.498 "method": "bdev_iscsi_set_options", 00:29:48.498 "params": { 00:29:48.498 "timeout_sec": 30 00:29:48.498 } 00:29:48.498 }, 00:29:48.498 { 00:29:48.498 "method": "bdev_nvme_set_options", 00:29:48.498 "params": { 00:29:48.498 "action_on_timeout": "none", 00:29:48.498 "timeout_us": 0, 00:29:48.498 "timeout_admin_us": 0, 00:29:48.498 "keep_alive_timeout_ms": 10000, 00:29:48.498 "arbitration_burst": 0, 00:29:48.498 "low_priority_weight": 0, 00:29:48.498 "medium_priority_weight": 0, 00:29:48.498 "high_priority_weight": 0, 00:29:48.498 "nvme_adminq_poll_period_us": 10000, 00:29:48.498 "nvme_ioq_poll_period_us": 0, 00:29:48.498 "io_queue_requests": 512, 00:29:48.498 "delay_cmd_submit": true, 00:29:48.498 "transport_retry_count": 4, 00:29:48.498 "bdev_retry_count": 3, 00:29:48.498 "transport_ack_timeout": 0, 00:29:48.498 "c 12:26:41 -- common/autotest_common.sh@10 -- # set +x 00:29:48.498 trlr_loss_timeout_sec": 0, 00:29:48.498 "reconnect_delay_sec": 0, 00:29:48.498 "fast_io_fail_timeout_sec": 0, 00:29:48.498 "disable_auto_failback": false, 00:29:48.498 "generate_uuids": false, 00:29:48.498 "transport_tos": 0, 00:29:48.498 "nvme_error_stat": false, 00:29:48.498 "rdma_srq_size": 0, 00:29:48.498 "io_path_stat": false, 00:29:48.498 "allow_accel_sequence": false, 00:29:48.498 "rdma_max_cq_size": 0, 00:29:48.498 "rdma_cm_event_timeout_ms": 0, 00:29:48.498 "dhchap_digests": [ 00:29:48.498 "sha256", 00:29:48.498 "sha384", 00:29:48.498 "sha512" 00:29:48.498 ], 00:29:48.498 "dhchap_dhgroups": [ 00:29:48.498 "null", 00:29:48.498 "ffdhe2048", 00:29:48.498 "ffdhe3072", 00:29:48.498 "ffdhe4096", 00:29:48.498 "ffdhe6144", 00:29:48.498 "ffdhe8192" 00:29:48.498 ] 00:29:48.498 } 00:29:48.498 }, 00:29:48.498 { 00:29:48.498 "method": "bdev_nvme_attach_controller", 00:29:48.498 "params": { 00:29:48.498 "name": "nvme0", 00:29:48.498 "trtype": "TCP", 00:29:48.498 "adrfam": "IPv4", 00:29:48.498 "traddr": "127.0.0.1", 00:29:48.498 "trsvcid": "4420", 00:29:48.498 
"subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:48.498 "prchk_reftag": false, 00:29:48.498 "prchk_guard": false, 00:29:48.498 "ctrlr_loss_timeout_sec": 0, 00:29:48.498 "reconnect_delay_sec": 0, 00:29:48.498 "fast_io_fail_timeout_sec": 0, 00:29:48.498 "psk": "key0", 00:29:48.498 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:48.498 "hdgst": false, 00:29:48.498 "ddgst": false 00:29:48.498 } 00:29:48.498 }, 00:29:48.498 { 00:29:48.498 "method": "bdev_nvme_set_hotplug", 00:29:48.498 "params": { 00:29:48.498 "period_us": 100000, 00:29:48.498 "enable": false 00:29:48.498 } 00:29:48.498 }, 00:29:48.498 { 00:29:48.498 "method": "bdev_wait_for_examine" 00:29:48.498 } 00:29:48.498 ] 00:29:48.498 }, 00:29:48.498 { 00:29:48.498 "subsystem": "nbd", 00:29:48.498 "config": [] 00:29:48.498 } 00:29:48.498 ] 00:29:48.498 }' 00:29:48.757 [2024-04-26 12:26:41.974262] Starting SPDK v24.05-pre git sha1 e29339c01 / DPDK 23.11.0 initialization... 00:29:48.757 [2024-04-26 12:26:41.974696] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81776 ] 00:29:48.757 [2024-04-26 12:26:42.113616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.015 [2024-04-26 12:26:42.230977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.015 [2024-04-26 12:26:42.413820] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:49.579 12:26:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:49.579 12:26:43 -- common/autotest_common.sh@850 -- # return 0 00:29:49.580 12:26:43 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:49.580 12:26:43 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:49.580 12:26:43 -- keyring/file.sh@120 -- # jq length 00:29:49.837 12:26:43 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:49.837 12:26:43 -- keyring/file.sh@121 -- # get_refcnt key0 00:29:49.837 12:26:43 -- keyring/common.sh@12 -- # get_key key0 00:29:49.837 12:26:43 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:49.837 12:26:43 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:49.837 12:26:43 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:49.837 12:26:43 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:50.094 12:26:43 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:50.094 12:26:43 -- keyring/file.sh@122 -- # get_refcnt key1 00:29:50.094 12:26:43 -- keyring/common.sh@12 -- # get_key key1 00:29:50.094 12:26:43 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:50.094 12:26:43 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:50.094 12:26:43 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:50.094 12:26:43 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:50.351 12:26:43 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:50.351 12:26:43 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:50.351 12:26:43 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:50.351 12:26:43 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:50.609 12:26:44 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:50.609 12:26:44 -- 
keyring/file.sh@1 -- # cleanup 00:29:50.609 12:26:44 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.iRBpydTMP6 /tmp/tmp.PS37I4nELP 00:29:50.609 12:26:44 -- keyring/file.sh@20 -- # killprocess 81776 00:29:50.609 12:26:44 -- common/autotest_common.sh@936 -- # '[' -z 81776 ']' 00:29:50.609 12:26:44 -- common/autotest_common.sh@940 -- # kill -0 81776 00:29:50.609 12:26:44 -- common/autotest_common.sh@941 -- # uname 00:29:50.609 12:26:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:50.609 12:26:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81776 00:29:50.609 killing process with pid 81776 00:29:50.609 Received shutdown signal, test time was about 1.000000 seconds 00:29:50.609 00:29:50.609 Latency(us) 00:29:50.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.609 =================================================================================================================== 00:29:50.609 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:50.609 12:26:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:50.609 12:26:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:50.609 12:26:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81776' 00:29:50.609 12:26:44 -- common/autotest_common.sh@955 -- # kill 81776 00:29:50.609 12:26:44 -- common/autotest_common.sh@960 -- # wait 81776 00:29:50.867 12:26:44 -- keyring/file.sh@21 -- # killprocess 81504 00:29:50.867 12:26:44 -- common/autotest_common.sh@936 -- # '[' -z 81504 ']' 00:29:50.867 12:26:44 -- common/autotest_common.sh@940 -- # kill -0 81504 00:29:50.867 12:26:44 -- common/autotest_common.sh@941 -- # uname 00:29:50.867 12:26:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:50.867 12:26:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81504 00:29:50.867 killing process with pid 81504 00:29:50.867 12:26:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:50.867 12:26:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:50.867 12:26:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81504' 00:29:50.867 12:26:44 -- common/autotest_common.sh@955 -- # kill 81504 00:29:50.867 [2024-04-26 12:26:44.334805] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:50.867 12:26:44 -- common/autotest_common.sh@960 -- # wait 81504 00:29:51.433 00:29:51.433 real 0m16.298s 00:29:51.433 user 0m40.581s 00:29:51.433 sys 0m3.147s 00:29:51.433 12:26:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:51.433 ************************************ 00:29:51.433 END TEST keyring_file 00:29:51.433 ************************************ 00:29:51.433 12:26:44 -- common/autotest_common.sh@10 -- # set +x 00:29:51.433 12:26:44 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:29:51.433 12:26:44 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:51.433 12:26:44 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:29:51.433 12:26:44 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:29:51.433 12:26:44 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:51.433 12:26:44 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:29:51.433 12:26:44 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:51.433 12:26:44 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:51.433 12:26:44 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:29:51.433 12:26:44 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 
']' 00:29:51.433 12:26:44 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:51.433 12:26:44 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:29:51.433 12:26:44 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:51.433 12:26:44 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:29:51.433 12:26:44 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:29:51.433 12:26:44 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:29:51.433 12:26:44 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:29:51.433 12:26:44 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:29:51.433 12:26:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:51.433 12:26:44 -- common/autotest_common.sh@10 -- # set +x 00:29:51.433 12:26:44 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:29:51.433 12:26:44 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:29:51.433 12:26:44 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:29:51.433 12:26:44 -- common/autotest_common.sh@10 -- # set +x 00:29:53.333 INFO: APP EXITING 00:29:53.333 INFO: killing all VMs 00:29:53.333 INFO: killing vhost app 00:29:53.333 INFO: EXIT DONE 00:29:53.898 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:53.898 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:53.898 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:54.464 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:54.464 Cleaning 00:29:54.464 Removing: /var/run/dpdk/spdk0/config 00:29:54.464 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:54.464 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:54.464 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:54.464 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:54.464 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:54.464 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:54.464 Removing: /var/run/dpdk/spdk1/config 00:29:54.464 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:54.464 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:54.464 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:54.464 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:54.464 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:54.464 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:54.464 Removing: /var/run/dpdk/spdk2/config 00:29:54.464 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:54.464 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:54.464 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:54.464 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:54.464 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:54.464 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:54.464 Removing: /var/run/dpdk/spdk3/config 00:29:54.464 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:54.464 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:54.464 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:54.464 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:54.723 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:54.723 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:54.723 Removing: /var/run/dpdk/spdk4/config 00:29:54.723 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:54.723 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:54.723 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:54.723 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:54.723 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:54.723 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:54.723 Removing: /dev/shm/nvmf_trace.0 00:29:54.723 Removing: /dev/shm/spdk_tgt_trace.pid58271 00:29:54.723 Removing: /var/run/dpdk/spdk0 00:29:54.723 Removing: /var/run/dpdk/spdk1 00:29:54.723 Removing: /var/run/dpdk/spdk2 00:29:54.723 Removing: /var/run/dpdk/spdk3 00:29:54.723 Removing: /var/run/dpdk/spdk4 00:29:54.723 Removing: /var/run/dpdk/spdk_pid58096 00:29:54.723 Removing: /var/run/dpdk/spdk_pid58271 00:29:54.723 Removing: /var/run/dpdk/spdk_pid58495 00:29:54.723 Removing: /var/run/dpdk/spdk_pid58584 00:29:54.723 Removing: /var/run/dpdk/spdk_pid58613 00:29:54.723 Removing: /var/run/dpdk/spdk_pid58738 00:29:54.723 Removing: /var/run/dpdk/spdk_pid58756 00:29:54.723 Removing: /var/run/dpdk/spdk_pid58884 00:29:54.723 Removing: /var/run/dpdk/spdk_pid59080 00:29:54.723 Removing: /var/run/dpdk/spdk_pid59226 00:29:54.723 Removing: /var/run/dpdk/spdk_pid59302 00:29:54.723 Removing: /var/run/dpdk/spdk_pid59385 00:29:54.723 Removing: /var/run/dpdk/spdk_pid59485 00:29:54.723 Removing: /var/run/dpdk/spdk_pid59566 00:29:54.723 Removing: /var/run/dpdk/spdk_pid59608 00:29:54.723 Removing: /var/run/dpdk/spdk_pid59649 00:29:54.723 Removing: /var/run/dpdk/spdk_pid59716 00:29:54.723 Removing: /var/run/dpdk/spdk_pid59813 00:29:54.723 Removing: /var/run/dpdk/spdk_pid60260 00:29:54.723 Removing: /var/run/dpdk/spdk_pid60316 00:29:54.723 Removing: /var/run/dpdk/spdk_pid60371 00:29:54.723 Removing: /var/run/dpdk/spdk_pid60387 00:29:54.723 Removing: /var/run/dpdk/spdk_pid60459 00:29:54.723 Removing: /var/run/dpdk/spdk_pid60476 00:29:54.723 Removing: /var/run/dpdk/spdk_pid60552 00:29:54.723 Removing: /var/run/dpdk/spdk_pid60568 00:29:54.723 Removing: /var/run/dpdk/spdk_pid60623 00:29:54.723 Removing: /var/run/dpdk/spdk_pid60641 00:29:54.723 Removing: /var/run/dpdk/spdk_pid60692 00:29:54.723 Removing: /var/run/dpdk/spdk_pid60710 00:29:54.723 Removing: /var/run/dpdk/spdk_pid60842 00:29:54.723 Removing: /var/run/dpdk/spdk_pid60887 00:29:54.723 Removing: /var/run/dpdk/spdk_pid60961 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61026 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61060 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61138 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61172 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61216 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61260 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61293 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61337 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61376 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61414 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61459 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61497 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61538 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61578 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61616 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61656 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61700 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61738 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61778 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61825 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61862 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61906 00:29:54.723 Removing: /var/run/dpdk/spdk_pid61945 00:29:54.723 Removing: /var/run/dpdk/spdk_pid62015 00:29:54.723 Removing: /var/run/dpdk/spdk_pid62118 00:29:54.723 Removing: /var/run/dpdk/spdk_pid62442 00:29:54.723 Removing: /var/run/dpdk/spdk_pid62469 00:29:54.723 
Removing: /var/run/dpdk/spdk_pid62504 00:29:54.723 Removing: /var/run/dpdk/spdk_pid62523 00:29:54.723 Removing: /var/run/dpdk/spdk_pid62539 00:29:54.723 Removing: /var/run/dpdk/spdk_pid62563 00:29:54.723 Removing: /var/run/dpdk/spdk_pid62577 00:29:54.723 Removing: /var/run/dpdk/spdk_pid62592 00:29:54.723 Removing: /var/run/dpdk/spdk_pid62617 00:29:54.723 Removing: /var/run/dpdk/spdk_pid62630 00:29:54.723 Removing: /var/run/dpdk/spdk_pid62651 00:29:54.723 Removing: /var/run/dpdk/spdk_pid62671 00:29:54.723 Removing: /var/run/dpdk/spdk_pid62690 00:29:54.723 Removing: /var/run/dpdk/spdk_pid62706 00:29:54.723 Removing: /var/run/dpdk/spdk_pid62729 00:29:54.983 Removing: /var/run/dpdk/spdk_pid62744 00:29:54.983 Removing: /var/run/dpdk/spdk_pid62760 00:29:54.983 Removing: /var/run/dpdk/spdk_pid62784 00:29:54.983 Removing: /var/run/dpdk/spdk_pid62797 00:29:54.983 Removing: /var/run/dpdk/spdk_pid62813 00:29:54.983 Removing: /var/run/dpdk/spdk_pid62854 00:29:54.983 Removing: /var/run/dpdk/spdk_pid62873 00:29:54.983 Removing: /var/run/dpdk/spdk_pid62902 00:29:54.983 Removing: /var/run/dpdk/spdk_pid62974 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63008 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63023 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63062 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63066 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63079 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63125 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63139 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63178 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63183 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63197 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63207 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63216 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63231 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63241 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63250 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63288 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63319 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63328 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63366 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63376 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63383 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63433 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63445 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63482 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63489 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63497 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63510 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63517 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63525 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63538 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63540 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63623 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63676 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63796 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63841 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63889 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63904 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63920 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63940 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63977 00:29:54.983 Removing: /var/run/dpdk/spdk_pid63997 00:29:54.983 Removing: /var/run/dpdk/spdk_pid64077 00:29:54.983 Removing: /var/run/dpdk/spdk_pid64099 00:29:54.983 Removing: /var/run/dpdk/spdk_pid64143 00:29:54.983 Removing: /var/run/dpdk/spdk_pid64216 00:29:54.983 Removing: /var/run/dpdk/spdk_pid64277 00:29:54.983 Removing: /var/run/dpdk/spdk_pid64311 00:29:54.983 Removing: 
/var/run/dpdk/spdk_pid64415 00:29:54.983 Removing: /var/run/dpdk/spdk_pid64468 00:29:54.983 Removing: /var/run/dpdk/spdk_pid64510 00:29:54.983 Removing: /var/run/dpdk/spdk_pid64769 00:29:54.983 Removing: /var/run/dpdk/spdk_pid64884 00:29:54.983 Removing: /var/run/dpdk/spdk_pid64916 00:29:54.983 Removing: /var/run/dpdk/spdk_pid65250 00:29:54.983 Removing: /var/run/dpdk/spdk_pid65288 00:29:54.983 Removing: /var/run/dpdk/spdk_pid65600 00:29:54.983 Removing: /var/run/dpdk/spdk_pid66023 00:29:54.983 Removing: /var/run/dpdk/spdk_pid66295 00:29:54.983 Removing: /var/run/dpdk/spdk_pid67079 00:29:54.983 Removing: /var/run/dpdk/spdk_pid67916 00:29:54.983 Removing: /var/run/dpdk/spdk_pid68028 00:29:54.983 Removing: /var/run/dpdk/spdk_pid68101 00:29:54.983 Removing: /var/run/dpdk/spdk_pid69379 00:29:54.983 Removing: /var/run/dpdk/spdk_pid69601 00:29:54.983 Removing: /var/run/dpdk/spdk_pid69910 00:29:54.983 Removing: /var/run/dpdk/spdk_pid70024 00:29:54.983 Removing: /var/run/dpdk/spdk_pid70158 00:29:54.983 Removing: /var/run/dpdk/spdk_pid70185 00:29:54.983 Removing: /var/run/dpdk/spdk_pid70213 00:29:54.983 Removing: /var/run/dpdk/spdk_pid70242 00:29:54.983 Removing: /var/run/dpdk/spdk_pid70335 00:29:54.983 Removing: /var/run/dpdk/spdk_pid70470 00:29:54.983 Removing: /var/run/dpdk/spdk_pid70618 00:29:54.983 Removing: /var/run/dpdk/spdk_pid70700 00:29:54.983 Removing: /var/run/dpdk/spdk_pid70892 00:29:54.983 Removing: /var/run/dpdk/spdk_pid70975 00:29:54.983 Removing: /var/run/dpdk/spdk_pid71073 00:29:54.983 Removing: /var/run/dpdk/spdk_pid71380 00:29:54.983 Removing: /var/run/dpdk/spdk_pid71769 00:29:54.983 Removing: /var/run/dpdk/spdk_pid71771 00:29:54.983 Removing: /var/run/dpdk/spdk_pid72050 00:29:54.983 Removing: /var/run/dpdk/spdk_pid72074 00:29:54.983 Removing: /var/run/dpdk/spdk_pid72089 00:29:54.983 Removing: /var/run/dpdk/spdk_pid72114 00:29:54.983 Removing: /var/run/dpdk/spdk_pid72125 00:29:54.983 Removing: /var/run/dpdk/spdk_pid72407 00:29:55.242 Removing: /var/run/dpdk/spdk_pid72456 00:29:55.242 Removing: /var/run/dpdk/spdk_pid72736 00:29:55.242 Removing: /var/run/dpdk/spdk_pid72932 00:29:55.242 Removing: /var/run/dpdk/spdk_pid73314 00:29:55.242 Removing: /var/run/dpdk/spdk_pid73805 00:29:55.242 Removing: /var/run/dpdk/spdk_pid74408 00:29:55.242 Removing: /var/run/dpdk/spdk_pid74415 00:29:55.242 Removing: /var/run/dpdk/spdk_pid76353 00:29:55.242 Removing: /var/run/dpdk/spdk_pid76419 00:29:55.242 Removing: /var/run/dpdk/spdk_pid76479 00:29:55.242 Removing: /var/run/dpdk/spdk_pid76534 00:29:55.242 Removing: /var/run/dpdk/spdk_pid76663 00:29:55.242 Removing: /var/run/dpdk/spdk_pid76725 00:29:55.242 Removing: /var/run/dpdk/spdk_pid76785 00:29:55.242 Removing: /var/run/dpdk/spdk_pid76844 00:29:55.242 Removing: /var/run/dpdk/spdk_pid77170 00:29:55.242 Removing: /var/run/dpdk/spdk_pid78349 00:29:55.242 Removing: /var/run/dpdk/spdk_pid78492 00:29:55.242 Removing: /var/run/dpdk/spdk_pid78735 00:29:55.242 Removing: /var/run/dpdk/spdk_pid79301 00:29:55.242 Removing: /var/run/dpdk/spdk_pid79465 00:29:55.242 Removing: /var/run/dpdk/spdk_pid79631 00:29:55.242 Removing: /var/run/dpdk/spdk_pid79728 00:29:55.242 Removing: /var/run/dpdk/spdk_pid79893 00:29:55.242 Removing: /var/run/dpdk/spdk_pid80007 00:29:55.242 Removing: /var/run/dpdk/spdk_pid80676 00:29:55.242 Removing: /var/run/dpdk/spdk_pid80711 00:29:55.242 Removing: /var/run/dpdk/spdk_pid80746 00:29:55.242 Removing: /var/run/dpdk/spdk_pid81004 00:29:55.242 Removing: /var/run/dpdk/spdk_pid81039 00:29:55.242 Removing: /var/run/dpdk/spdk_pid81070 
00:29:55.242 Removing: /var/run/dpdk/spdk_pid81504 00:29:55.242 Removing: /var/run/dpdk/spdk_pid81521 00:29:55.242 Removing: /var/run/dpdk/spdk_pid81776 00:29:55.242 Clean 00:29:55.242 12:26:48 -- common/autotest_common.sh@1437 -- # return 0 00:29:55.242 12:26:48 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:29:55.242 12:26:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:55.242 12:26:48 -- common/autotest_common.sh@10 -- # set +x 00:29:55.501 12:26:48 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:29:55.501 12:26:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:55.501 12:26:48 -- common/autotest_common.sh@10 -- # set +x 00:29:55.501 12:26:48 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:55.501 12:26:48 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:55.501 12:26:48 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:55.501 12:26:48 -- spdk/autotest.sh@389 -- # hash lcov 00:29:55.501 12:26:48 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:55.501 12:26:48 -- spdk/autotest.sh@391 -- # hostname 00:29:55.501 12:26:48 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:55.759 geninfo: WARNING: invalid characters removed from testname! 00:30:22.327 12:27:15 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:25.612 12:27:18 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:28.899 12:27:21 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:31.431 12:27:24 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:34.003 12:27:27 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 
00:30:36.533 12:27:29 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:39.116 12:27:32 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:39.116 12:27:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:39.116 12:27:32 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:39.116 12:27:32 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.116 12:27:32 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.116 12:27:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.116 12:27:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.116 12:27:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.116 12:27:32 -- paths/export.sh@5 -- $ export PATH 00:30:39.116 12:27:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.116 12:27:32 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:30:39.116 12:27:32 -- common/autobuild_common.sh@435 -- $ date +%s 00:30:39.116 12:27:32 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714134452.XXXXXX 00:30:39.116 12:27:32 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714134452.iPPip7 00:30:39.116 12:27:32 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:30:39.116 12:27:32 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:30:39.116 12:27:32 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:30:39.116 12:27:32 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:30:39.116 12:27:32 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude 
/home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:30:39.116 12:27:32 -- common/autobuild_common.sh@451 -- $ get_config_params 00:30:39.116 12:27:32 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:30:39.116 12:27:32 -- common/autotest_common.sh@10 -- $ set +x 00:30:39.116 12:27:32 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:30:39.116 12:27:32 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:30:39.116 12:27:32 -- pm/common@17 -- $ local monitor 00:30:39.116 12:27:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:39.116 12:27:32 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=83484 00:30:39.116 12:27:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:39.116 12:27:32 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=83486 00:30:39.116 12:27:32 -- pm/common@21 -- $ date +%s 00:30:39.116 12:27:32 -- pm/common@26 -- $ sleep 1 00:30:39.116 12:27:32 -- pm/common@21 -- $ date +%s 00:30:39.116 12:27:32 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1714134452 00:30:39.116 12:27:32 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1714134452 00:30:39.374 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1714134452_collect-vmstat.pm.log 00:30:39.374 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1714134452_collect-cpu-load.pm.log 00:30:40.309 12:27:33 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:30:40.309 12:27:33 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:30:40.309 12:27:33 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:30:40.309 12:27:33 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:40.309 12:27:33 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:40.309 12:27:33 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:40.309 12:27:33 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:40.309 12:27:33 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:40.309 12:27:33 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:40.309 12:27:33 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:40.309 12:27:33 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:40.309 12:27:33 -- pm/common@30 -- $ signal_monitor_resources TERM 00:30:40.309 12:27:33 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:30:40.309 12:27:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:40.309 12:27:33 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:30:40.309 12:27:33 -- pm/common@45 -- $ pid=83492 00:30:40.309 12:27:33 -- pm/common@52 -- $ sudo kill -TERM 83492 00:30:40.309 12:27:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:40.309 12:27:33 -- pm/common@44 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:30:40.309 12:27:33 -- pm/common@45 -- $ pid=83491 00:30:40.309 12:27:33 -- pm/common@52 -- $ sudo kill -TERM 83491 00:30:40.309 + [[ -n 5166 ]] 00:30:40.309 + sudo kill 5166 00:30:40.320 [Pipeline] } 00:30:40.342 [Pipeline] // timeout 00:30:40.349 [Pipeline] } 00:30:40.367 [Pipeline] // stage 00:30:40.374 [Pipeline] } 00:30:40.391 [Pipeline] // catchError 00:30:40.401 [Pipeline] stage 00:30:40.403 [Pipeline] { (Stop VM) 00:30:40.417 [Pipeline] sh 00:30:40.695 + vagrant halt 00:30:44.883 ==> default: Halting domain... 00:30:50.215 [Pipeline] sh 00:30:50.494 + vagrant destroy -f 00:30:54.744 ==> default: Removing domain... 00:30:54.758 [Pipeline] sh 00:30:55.038 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:30:55.052 [Pipeline] } 00:30:55.072 [Pipeline] // stage 00:30:55.077 [Pipeline] } 00:30:55.096 [Pipeline] // dir 00:30:55.102 [Pipeline] } 00:30:55.121 [Pipeline] // wrap 00:30:55.128 [Pipeline] } 00:30:55.146 [Pipeline] // catchError 00:30:55.155 [Pipeline] stage 00:30:55.158 [Pipeline] { (Epilogue) 00:30:55.175 [Pipeline] sh 00:30:55.458 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:02.032 [Pipeline] catchError 00:31:02.034 [Pipeline] { 00:31:02.050 [Pipeline] sh 00:31:02.332 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:02.612 Artifacts sizes are good 00:31:02.632 [Pipeline] } 00:31:02.644 [Pipeline] // catchError 00:31:02.652 [Pipeline] archiveArtifacts 00:31:02.657 Archiving artifacts 00:31:02.832 [Pipeline] cleanWs 00:31:02.843 [WS-CLEANUP] Deleting project workspace... 00:31:02.843 [WS-CLEANUP] Deferred wipeout is used... 00:31:02.847 [WS-CLEANUP] done 00:31:02.849 [Pipeline] } 00:31:02.865 [Pipeline] // stage 00:31:02.872 [Pipeline] } 00:31:02.885 [Pipeline] // node 00:31:02.890 [Pipeline] End of Pipeline 00:31:02.919 Finished: SUCCESS
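
Note on the keyring_file trace above: test/keyring/file.sh drives a bdevperf instance over the RPC socket /var/tmp/bperf.sock, registers file-based TLS PSKs with keyring_file_add_key, and attaches an NVMe/TCP controller that references them by key name. The block below is only a rough by-hand replay distilled from that trace, not the test script itself. It assumes the NVMe/TCP target set up earlier in the test is still listening on 127.0.0.1:4420, that a bdevperf app is serving /var/tmp/bperf.sock, and the PSK string is a placeholder (the test derives the real interchange-format key with format_interchange_psk; that value is not shown in the log).

  #!/usr/bin/env bash
  # Sketch of the steps traced in keyring/file.sh above (hedged replay, not the test).
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  key_path=$(mktemp)
  # Placeholder PSK: the test writes an NVMeTLSkey-1 interchange key produced by
  # format_interchange_psk here; the literal value below is NOT a usable key.
  echo "NVMeTLSkey-1:00:PLACEHOLDER:" > "$key_path"
  chmod 0600 "$key_path"   # keyring_file_add_key rejects key files with loose permissions

  rpc keyring_file_add_key key0 "$key_path"
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

  # Refcount check used by the test: attaching a controller bumps the key's refcnt.
  rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'

  # The test then captures the running configuration and hands it to a fresh bdevperf
  # through process substitution (seen as "-c /dev/fd/63" in the trace), after stopping
  # the first app.
  config=$(rpc save_config)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 \
      -t 1 -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config") &

  # Teardown mirrors the cleanup step at the end of the test.
  rpc bdev_nvme_detach_controller nvme0
  rpc keyring_file_remove_key key0
  rm -f "$key_path"

The two JSON-RPC error responses near the top of this excerpt are the negative cases of the same flow: "Operation not permitted" is returned when keyring_file_add_key is pointed at a key file without 0600 permissions, and the later code -19 ("No such device") comes from bdev_nvme_attach_controller when the registered key's backing file has been removed before the attach.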